Google Cloud AI Platform V1 Client - Class BatchPredictionJob (0.26.2)

Reference documentation and code samples for the Google Cloud AI Platform V1 Client class BatchPredictionJob.

A job that uses a Model to produce predictions on multiple input instances. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.

Generated from protobuf message google.cloud.aiplatform.v1.BatchPredictionJob

Namespace

Google \ Cloud \ AIPlatform \ V1

Methods

__construct

Constructor.

Parameters
NameDescription
data array

Optional. Data for populating the Message object.

↳ name string

Output only. Resource name of the BatchPredictionJob.

↳ display_name string

Required. The user-defined name of this BatchPredictionJob.

↳ model string

The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set. The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is used. The model resource can also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

↳ model_version_id string

Output only. The version ID of the Model that produces the predictions via this job.

↳ unmanaged_container_model Google\Cloud\AIPlatform\V1\UnmanagedContainerModel

Contains the model information necessary to perform batch prediction without requiring the model to be uploaded to the Model Registry. Exactly one of model and unmanaged_container_model must be set.

↳ input_config Google\Cloud\AIPlatform\V1\BatchPredictionJob\InputConfig

Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri.

↳ instance_config Google\Cloud\AIPlatform\V1\BatchPredictionJob\InstanceConfig

Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.

↳ model_parameters Google\Protobuf\Value

The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri.

↳ output_config Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputConfig

Required. The configuration specifying where output predictions should be written. The schema of any single prediction may be specified as a concatenation of Model's PredictSchemata's instance_schema_uri and prediction_schema_uri.

↳ dedicated_resources Google\Cloud\AIPlatform\V1\BatchDedicatedResources

The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.

↳ service_account string

The service account that the DeployedModel's container runs as. If not specified, a system-generated one is used; it has minimal permissions, and the custom container, if used, may not have enough permission to access other Google Cloud resources. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

↳ manual_batch_tuning_parameters Google\Cloud\AIPlatform\V1\ManualBatchTuningParameters

Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself).

↳ generate_explanation bool

Generate explanation with the batch prediction results. When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object: * bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object. * jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object. * csv: Generating explanations for CSV format is not supported. If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.

↳ explanation_spec Google\Cloud\AIPlatform\V1\ExplanationSpec

Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to true. This value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of the explanation_spec object is not populated, the corresponding field of the Model.explanation_spec object is inherited.

↳ output_info Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputInfo

Output only. Information further describing the output of this job.

↳ state int

Output only. The detailed state of the job.

↳ error Google\Rpc\Status

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

↳ partial_failures array<Google\Rpc\Status>

Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.

↳ resources_consumed Google\Cloud\AIPlatform\V1\ResourcesConsumed

Output only. Information about resources that had been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes. Note: This field currently may not be populated for batch predictions that use AutoML Models.

↳ completion_stats Google\Cloud\AIPlatform\V1\CompletionStats

Output only. Statistics on completed and failed prediction instances.

↳ create_time Google\Protobuf\Timestamp

Output only. Time when the BatchPredictionJob was created.

↳ start_time Google\Protobuf\Timestamp

Output only. Time when the BatchPredictionJob first entered the JOB_STATE_RUNNING state.

↳ end_time Google\Protobuf\Timestamp

Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

↳ update_time Google\Protobuf\Timestamp

Output only. Time when the BatchPredictionJob was most recently updated.

↳ labels array|Google\Protobuf\Internal\MapField

The labels with user-defined metadata to organize BatchPredictionJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

↳ encryption_spec Google\Cloud\AIPlatform\V1\EncryptionSpec

Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.

↳ disable_container_logging bool

For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Note that the logs incur costs, which are subject to Cloud Logging pricing. Users can disable container logging by setting this flag to true.
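
The following is a minimal sketch of populating the message through the constructor's data array; the project, model ID, and Cloud Storage paths are placeholders, not values from this reference.

```php
use Google\Cloud\AIPlatform\V1\BatchPredictionJob;
use Google\Cloud\AIPlatform\V1\BatchPredictionJob\InputConfig;
use Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputConfig;
use Google\Cloud\AIPlatform\V1\GcsDestination;
use Google\Cloud\AIPlatform\V1\GcsSource;

// All resource names and URIs below are placeholders.
$job = new BatchPredictionJob([
    'display_name' => 'nightly-scoring',
    'model' => 'projects/my-project/locations/us-central1/models/1234567890',
    'input_config' => new InputConfig([
        'instances_format' => 'jsonl',
        'gcs_source' => new GcsSource(['uris' => ['gs://my-bucket/input/*.jsonl']]),
    ]),
    'output_config' => new OutputConfig([
        'predictions_format' => 'jsonl',
        'gcs_destination' => new GcsDestination([
            'output_uri_prefix' => 'gs://my-bucket/output/',
        ]),
    ]),
]);
```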

getName

Output only. Resource name of the BatchPredictionJob.

Returns
TypeDescription
string

setName

Output only. Resource name of the BatchPredictionJob.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getDisplayName

Required. The user-defined name of this BatchPredictionJob.

Returns
TypeDescription
string

setDisplayName

Required. The user-defined name of this BatchPredictionJob.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getModel

The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location.

Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set. The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is used. The model resource can also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

Returns
TypeDescription
string

setModel

The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location.

Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set. The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is used. The model resource can also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

Parameter
NameDescription
var string
Returns
TypeDescription
$this
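
The accepted resource-name forms can be sketched as follows; the project, model ID, and publisher model below are placeholders.

```php
// Pin a specific version by ID or by alias (placeholders shown).
$job->setModel('projects/my-project/locations/us-central1/models/1234567890@2');
$job->setModel('projects/my-project/locations/us-central1/models/1234567890@golden');

// Or reference a publisher model.
$job->setModel('publishers/google/models/some-publisher-model');
```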

getModelVersionId

Output only. The version ID of the Model that produces the predictions via this job.

Returns
TypeDescription
string

setModelVersionId

Output only. The version ID of the Model that produces the predictions via this job.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getUnmanagedContainerModel

Contains the model information necessary to perform batch prediction without requiring the model to be uploaded to the Model Registry.

Exactly one of model and unmanaged_container_model must be set.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\UnmanagedContainerModel|null

hasUnmanagedContainerModel

clearUnmanagedContainerModel

setUnmanagedContainerModel

Contains the model information necessary to perform batch prediction without requiring the model to be uploaded to the Model Registry.

Exactly one of model and unmanaged_container_model must be set.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\UnmanagedContainerModel
Returns
TypeDescription
$this
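
A hedged sketch of the unmanaged-container path; the artifact location and serving image below are placeholders.

```php
use Google\Cloud\AIPlatform\V1\ModelContainerSpec;
use Google\Cloud\AIPlatform\V1\UnmanagedContainerModel;

// Placeholder artifact location and container image; do not also call setModel(),
// since exactly one of model and unmanaged_container_model may be set.
$job->setUnmanagedContainerModel(new UnmanagedContainerModel([
    'artifact_uri' => 'gs://my-bucket/model-artifacts/',
    'container_spec' => new ModelContainerSpec([
        'image_uri' => 'us-docker.pkg.dev/my-project/my-repo/serving-image:latest',
    ]),
]));
```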

getInputConfig

Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\BatchPredictionJob\InputConfig|null

hasInputConfig

clearInputConfig

setInputConfig

Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\BatchPredictionJob\InputConfig
Returns
TypeDescription
$this
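
As an alternative to Cloud Storage input, a BigQuery source can be configured; this sketch assumes a placeholder table URI.

```php
use Google\Cloud\AIPlatform\V1\BatchPredictionJob\InputConfig;
use Google\Cloud\AIPlatform\V1\BigQuerySource;

// Placeholder BigQuery table; instances_format must match the source type.
$job->setInputConfig(new InputConfig([
    'instances_format' => 'bigquery',
    'bigquery_source' => new BigQuerySource([
        'input_uri' => 'bq://my-project.my_dataset.input_table',
    ]),
]));
```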

getInstanceConfig

Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\BatchPredictionJob\InstanceConfig|null

hasInstanceConfig

clearInstanceConfig

setInstanceConfig

Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\BatchPredictionJob\InstanceConfig
Returns
TypeDescription
$this
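
A minimal sketch of using InstanceConfig, assuming a hypothetical key field name from the input instances:

```php
use Google\Cloud\AIPlatform\V1\BatchPredictionJob\InstanceConfig;

// 'record_id' is a placeholder field name present in the input instances.
$job->setInstanceConfig(new InstanceConfig([
    'key_field' => 'record_id',
]));
```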

getModelParameters

The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri.

Returns
TypeDescription
Google\Protobuf\Value|null

hasModelParameters

clearModelParameters

setModelParameters

The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri.

Parameter
NameDescription
var Google\Protobuf\Value
Returns
TypeDescription
$this
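
The shape of the parameters is dictated by the Model's parameters_schema_uri; as a sketch only, a struct-valued Google\Protobuf\Value can be assembled like this (the parameter name and value are hypothetical).

```php
use Google\Protobuf\Struct;
use Google\Protobuf\Value;

// 'confidenceThreshold' is a hypothetical parameter; consult the Model's
// parameters_schema_uri for the names it actually accepts.
$job->setModelParameters(new Value([
    'struct_value' => new Struct([
        'fields' => [
            'confidenceThreshold' => new Value(['number_value' => 0.5]),
        ],
    ]),
]));
```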

getOutputConfig

Required. The configuration specifying where output predictions should be written.

The schema of any single prediction may be specified as a concatenation of Model's PredictSchemata's instance_schema_uri and prediction_schema_uri.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputConfig|null

hasOutputConfig

clearOutputConfig

setOutputConfig

Required. The configuration specifying where output predictions should be written.

The schema of any single prediction may be specified as a concatenation of Model's PredictSchemata's instance_schema_uri and prediction_schema_uri.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputConfig
Returns
TypeDescription
$this
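
Predictions can also be written to BigQuery; this sketch uses a placeholder dataset URI.

```php
use Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputConfig;
use Google\Cloud\AIPlatform\V1\BigQueryDestination;

// Placeholder output dataset; predictions_format must match the destination type.
$job->setOutputConfig(new OutputConfig([
    'predictions_format' => 'bigquery',
    'bigquery_destination' => new BigQueryDestination([
        'output_uri' => 'bq://my-project.my_dataset',
    ]),
]));
```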

getDedicatedResources

The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\BatchDedicatedResources|null

hasDedicatedResources

clearDedicatedResources

setDedicatedResources

The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\BatchDedicatedResources
Returns
TypeDescription
$this
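
A minimal sketch of requesting dedicated resources; the machine type and replica counts are placeholders and must be supported by the Model.

```php
use Google\Cloud\AIPlatform\V1\BatchDedicatedResources;
use Google\Cloud\AIPlatform\V1\MachineSpec;

// Placeholder machine shape and replica counts.
$job->setDedicatedResources(new BatchDedicatedResources([
    'machine_spec' => new MachineSpec(['machine_type' => 'n1-standard-4']),
    'starting_replica_count' => 1,
    'max_replica_count' => 4,
]));
```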

getServiceAccount

The service account that the DeployedModel's container runs as. If not specified, a system-generated one is used; it has minimal permissions, and the custom container, if used, may not have enough permission to access other Google Cloud resources.

Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

Returns
TypeDescription
string

setServiceAccount

The service account that the DeployedModel's container runs as. If not specified, a system-generated one is used; it has minimal permissions, and the custom container, if used, may not have enough permission to access other Google Cloud resources.

Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getManualBatchTuningParameters

Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself).

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\ManualBatchTuningParameters|null

hasManualBatchTuningParameters

clearManualBatchTuningParameters

setManualBatchTuningParameters

Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself).

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\ManualBatchTuningParameters
Returns
TypeDescription
$this
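
A sketch, assuming dedicated_resources is also set on the job; the batch size is a placeholder.

```php
use Google\Cloud\AIPlatform\V1\ManualBatchTuningParameters;

// Only meaningful when dedicated_resources is used.
$job->setManualBatchTuningParameters(new ManualBatchTuningParameters([
    'batch_size' => 16,
]));
```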

getGenerateExplanation

Generate explanation with the batch prediction results.

When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:

  • bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
  • jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
  • csv: Generating explanations for CSV format is not supported. If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.
Returns
TypeDescription
bool

setGenerateExplanation

Generate explanation with the batch prediction results.

When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:

  • bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
  • jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
  • csv: Generating explanations for CSV format is not supported. If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.
Parameter
NameDescription
var bool
Returns
TypeDescription
$this
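
A minimal sketch; when the job-level explanation_spec is not set, the Model's explanation_spec is inherited as described above.

```php
// Request explanations alongside the predictions.
$job->setGenerateExplanation(true);
```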

getExplanationSpec

Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to true.

This value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of the explanation_spec object is not populated, the corresponding field of the Model.explanation_spec object is inherited.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\ExplanationSpec|null

hasExplanationSpec

clearExplanationSpec

setExplanationSpec

Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to true.

This value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of the explanation_spec object is not populated, the corresponding field of the Model.explanation_spec object is inherited.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\ExplanationSpec
Returns
TypeDescription
$this

getOutputInfo

Output only. Information further describing the output of this job.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputInfo|null

hasOutputInfo

clearOutputInfo

setOutputInfo

Output only. Information further describing the output of this job.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\BatchPredictionJob\OutputInfo
Returns
TypeDescription
$this

getState

Output only. The detailed state of the job.

Returns
TypeDescription
int

setState

Output only. The detailed state of the job.

Parameter
NameDescription
var int
Returns
TypeDescription
$this

getError

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

Returns
TypeDescription
Google\Rpc\Status|null

hasError

clearError

setError

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

Parameter
NameDescription
var Google\Rpc\Status
Returns
TypeDescription
$this

getPartialFailures

Output only. Partial failures encountered.

For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.

Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setPartialFailures

Output only. Partial failures encountered.

For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.

Parameter
NameDescription
var array<Google\Rpc\Status>
Returns
TypeDescription
$this

getResourcesConsumed

Output only. Information about resources that had been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes.

Note: This field currently may not be populated for batch predictions that use AutoML Models.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\ResourcesConsumed|null

hasResourcesConsumed

clearResourcesConsumed

setResourcesConsumed

Output only. Information about resources that had been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes.

Note: This field currently may not be populated for batch predictions that use AutoML Models.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\ResourcesConsumed
Returns
TypeDescription
$this

getCompletionStats

Output only. Statistics on completed and failed prediction instances.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\CompletionStats|null

hasCompletionStats

clearCompletionStats

setCompletionStats

Output only. Statistics on completed and failed prediction instances.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\CompletionStats
Returns
TypeDescription
$this
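
A hedged sketch of inspecting a job object returned by the JobService; the fields read here follow the getters documented above.

```php
use Google\Cloud\AIPlatform\V1\JobState;

if ($job->getState() === JobState::JOB_STATE_FAILED && $job->hasError()) {
    printf('Job failed: %s' . PHP_EOL, $job->getError()->getMessage());
} elseif ($job->getState() === JobState::JOB_STATE_SUCCEEDED) {
    $stats = $job->getCompletionStats();
    if ($stats !== null) {
        printf(
            'Succeeded: %d successful, %d failed' . PHP_EOL,
            $stats->getSuccessfulCount(),
            $stats->getFailedCount()
        );
    }
}
```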

getCreateTime

Output only. Time when the BatchPredictionJob was created.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasCreateTime

clearCreateTime

setCreateTime

Output only. Time when the BatchPredictionJob was created.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this

getStartTime

Output only. Time when the BatchPredictionJob first entered the JOB_STATE_RUNNING state.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasStartTime

clearStartTime

setStartTime

Output only. Time when the BatchPredictionJob first entered the JOB_STATE_RUNNING state.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this

getEndTime

Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasEndTime

clearEndTime

setEndTime

Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this

getUpdateTime

Output only. Time when the BatchPredictionJob was most recently updated.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasUpdateTime

clearUpdateTime

setUpdateTime

Output only. Time when the BatchPredictionJob was most recently updated.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this
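
The timestamp getters return Google\Protobuf\Timestamp objects (or null before the corresponding state is reached); a short sketch of converting one to a native DateTime:

```php
// getCreateTime() may be null on a locally constructed message.
$created = $job->getCreateTime();
if ($created !== null) {
    echo $created->toDateTime()->format(DATE_ATOM), PHP_EOL;
}
```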

getLabels

The labels with user-defined metadata to organize BatchPredictionJobs.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

Returns
TypeDescription
Google\Protobuf\Internal\MapField

setLabels

The labels with user-defined metadata to organize BatchPredictionJobs.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

Parameter
NameDescription
var array|Google\Protobuf\Internal\MapField
Returns
TypeDescription
$this
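
A short sketch of attaching labels; the keys and values are placeholders and must respect the limits noted above.

```php
$job->setLabels([
    'team' => 'forecasting',
    'env'  => 'prod',
]);
```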

getEncryptionSpec

Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\EncryptionSpec|null

hasEncryptionSpec

clearEncryptionSpec

setEncryptionSpec

Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\EncryptionSpec
Returns
TypeDescription
$this
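
A minimal sketch of enabling customer-managed encryption; the Cloud KMS key resource name is a placeholder.

```php
use Google\Cloud\AIPlatform\V1\EncryptionSpec;

// Placeholder Cloud KMS key; it must be usable from the job's location.
$job->setEncryptionSpec(new EncryptionSpec([
    'kms_key_name' => 'projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key',
]));
```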

getDisableContainerLogging

For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Note that the logs incur costs, which are subject to Cloud Logging pricing.

Users can disable container logging by setting this flag to true.

Returns
TypeDescription
bool

setDisableContainerLogging

For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Note that the logs incur costs, which are subject to Cloud Logging pricing.

Users can disable container logging by setting this flag to true.

Parameter
NameDescription
var bool
Returns
TypeDescription
$this