Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::BatchPredictionJob (v0.59.0)

Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::BatchPredictionJob.

A job that uses a Model to produce predictions on multiple input instances (see BatchPredictionJob.input_config). If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.
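
A minimal Ruby sketch of constructing this message and submitting it through the V1 JobService client is shown below; the project, region, model name, and Cloud Storage URIs are placeholders, and the regional endpoint must match the job's location.

    require "google/cloud/ai_platform/v1"

    # Build the BatchPredictionJob message. Nested messages may be passed as
    # hashes; the protobuf runtime coerces them to the proper message types.
    job = Google::Cloud::AIPlatform::V1::BatchPredictionJob.new(
      display_name: "my-batch-job",
      model: "projects/my-project/locations/us-central1/models/my-model",
      input_config: {
        instances_format: "jsonl",
        gcs_source: { uris: ["gs://my-bucket/instances.jsonl"] }
      },
      output_config: {
        predictions_format: "jsonl",
        gcs_destination: { output_uri_prefix: "gs://my-bucket/output/" }
      }
    )

    # Submit the job (assumes Application Default Credentials).
    client = Google::Cloud::AIPlatform::V1::JobService::Client.new do |config|
      config.endpoint = "us-central1-aiplatform.googleapis.com"
    end
    created = client.create_batch_prediction_job(
      parent: "projects/my-project/locations/us-central1",
      batch_prediction_job: job
    )
    puts created.name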

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#completion_stats

def completion_stats() -> ::Google::Cloud::AIPlatform::V1::CompletionStats
Returns
  • (::Google::Cloud::AIPlatform::V1::CompletionStats)

#create_time

def create_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)

#dedicated_resources

def dedicated_resources() -> ::Google::Cloud::AIPlatform::V1::BatchDedicatedResources
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchDedicatedResources) — The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.

#dedicated_resources=

def dedicated_resources=(value) -> ::Google::Cloud::AIPlatform::V1::BatchDedicatedResources
Parameter
  • value (::Google::Cloud::AIPlatform::V1::BatchDedicatedResources) — The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchDedicatedResources) — The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.
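
For Models that support DEDICATED_RESOURCES, a sketch such as the following configures the machine pool for the job; the machine type, accelerator, and replica counts are illustrative and assume the job message from the class-level example above.

    # Dedicated machines for the batch prediction job.
    job.dedicated_resources = Google::Cloud::AIPlatform::V1::BatchDedicatedResources.new(
      machine_spec: {
        machine_type: "n1-standard-4",
        accelerator_type: :NVIDIA_TESLA_T4,
        accelerator_count: 1
      },
      starting_replica_count: 1,
      max_replica_count: 4
    )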

#disable_container_logging

def disable_container_logging() -> ::Boolean
Returns
  • (::Boolean) — For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Please note that the logs incur costs, which are subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

#disable_container_logging=

def disable_container_logging=(value) -> ::Boolean
Parameter
  • value (::Boolean) — For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Please note that the logs incur costs, which are subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

Returns
  • (::Boolean) — For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Please note that the logs incur costs, which are subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

#display_name

def display_name() -> ::String
Returns
  • (::String) — Required. The user-defined name of this BatchPredictionJob.

#display_name=

def display_name=(value) -> ::String
Parameter
  • value (::String) — Required. The user-defined name of this BatchPredictionJob.
Returns
  • (::String) — Required. The user-defined name of this BatchPredictionJob.

#encryption_spec

def encryption_spec() -> ::Google::Cloud::AIPlatform::V1::EncryptionSpec
Returns
  • (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.

#encryption_spec=

def encryption_spec=(value) -> ::Google::Cloud::AIPlatform::V1::EncryptionSpec
Parameter
  • value (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.
Returns
  • (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.
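
A minimal sketch of enabling customer-managed encryption for the job; the Cloud KMS key name is a placeholder and is expected to be in the same location as the job.

    # Encrypt all resources created by this job with a customer-managed key.
    job.encryption_spec = Google::Cloud::AIPlatform::V1::EncryptionSpec.new(
      kms_key_name: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key"
    )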

#end_time

def end_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp) — Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

#error

def error() -> ::Google::Rpc::Status
Returns
  • (::Google::Rpc::Status) — Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

#explanation_spec

def explanation_spec() -> ::Google::Cloud::AIPlatform::V1::ExplanationSpec
Returns

#explanation_spec=

def explanation_spec=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationSpec
Parameter
  • value (::Google::Cloud::AIPlatform::V1::ExplanationSpec)
Returns
  • (::Google::Cloud::AIPlatform::V1::ExplanationSpec)

#generate_explanation

def generate_explanation() -> ::Boolean
Returns
  • (::Boolean) — Generate explanation with the batch prediction results.

    When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:

    • bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
    • jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
    • csv: Generating explanations for CSV format is not supported.

    If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.

#generate_explanation=

def generate_explanation=(value) -> ::Boolean
Parameter
  • value (::Boolean) — Generate explanation with the batch prediction results.

    When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:

    • bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
    • jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
    • csv: Generating explanations for CSV format is not supported.

    If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.

Returns
  • (::Boolean) — Generate explanation with the batch prediction results.

    When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:

    • bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
    • jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
    • csv: Generating explanations for CSV format is not supported.

    If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.
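
A short sketch of requesting explanations, assuming the jsonl output format from the class-level example; the Sampled Shapley parameters are placeholders and are needed only when the Model does not already define an explanation_spec.

    # Each jsonl output line will carry an additional "explanation" entry.
    job.generate_explanation = true

    # Required only if Model.explanation_spec is not populated.
    job.explanation_spec = Google::Cloud::AIPlatform::V1::ExplanationSpec.new(
      parameters: { sampled_shapley_attribution: { path_count: 10 } }
    )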

#input_config

def input_config() -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InputConfig
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InputConfig)

#input_config=

def input_config=(value) -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InputConfig
Parameter
  • value (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InputConfig)
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InputConfig)

#instance_config

def instance_config() -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig)

#instance_config=

def instance_config=(value) -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig
Parameter
  • value (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig)
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig)

#labels

def labels() -> ::Google::Protobuf::Map{::String => ::String}
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize BatchPredictionJobs.

    Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

    See https://goo.gl/xmQnxf for more information and examples of labels.

#labels=

def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
  • value (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize BatchPredictionJobs.

    Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

    See https://goo.gl/xmQnxf for more information and examples of labels.

Returns
  • (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize BatchPredictionJobs.

    Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

    See https://goo.gl/xmQnxf for more information and examples of labels.
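
Labels behave like an ordinary string-to-string map on the message; a small sketch with placeholder keys and values, subject to the restrictions above:

    job.labels["team"] = "forecasting"
    job.labels["environment"] = "staging"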

#manual_batch_tuning_parameters

def manual_batch_tuning_parameters() -> ::Google::Cloud::AIPlatform::V1::ManualBatchTuningParameters
Returns
  • (::Google::Cloud::AIPlatform::V1::ManualBatchTuningParameters)

#manual_batch_tuning_parameters=

def manual_batch_tuning_parameters=(value) -> ::Google::Cloud::AIPlatform::V1::ManualBatchTuningParameters
Parameter
  • value (::Google::Cloud::AIPlatform::V1::ManualBatchTuningParameters)
Returns
  • (::Google::Cloud::AIPlatform::V1::ManualBatchTuningParameters)

#model

def model() -> ::String
Returns
  • (::String) — The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set.

    The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.

    The model resource could also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

#model=

def model=(value) -> ::String
Parameter
  • value (::String) — The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set.

    The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.

    The model resource could also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

Returns
  • (::String) — The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set.

    The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.

    The model resource could also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
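
A brief sketch of the accepted resource-name forms; all identifiers are placeholders.

    # Pin a model version by ID or alias.
    job.model = "projects/my-project/locations/us-central1/models/my-model@2"
    # job.model = "projects/my-project/locations/us-central1/models/my-model@golden"

    # Or reference a publisher model.
    # job.model = "publishers/google/models/text-bison"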

#model_parameters

def model_parameters() -> ::Google::Protobuf::Value
Returns
  • (::Google::Protobuf::Value) — The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata parameters_schema_uri.

#model_parameters=

def model_parameters=(value) -> ::Google::Protobuf::Value
Parameter
  • value (::Google::Protobuf::Value) — The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata parameters_schema_uri.
Returns
  • (::Google::Protobuf::Value) — The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata parameters_schema_uri.
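
Because model_parameters is a generic google.protobuf.Value, a struct (JSON object) is the usual shape. In the sketch below the parameter key is a placeholder; the valid keys are defined by the Model's parameters_schema_uri.

    job.model_parameters = Google::Protobuf::Value.new(
      struct_value: Google::Protobuf::Struct.new(
        fields: {
          "confidenceThreshold" => Google::Protobuf::Value.new(number_value: 0.5)
        }
      )
    )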

#model_version_id

def model_version_id() -> ::String
Returns
  • (::String) — Output only. The version ID of the Model that produces the predictions via this job.

#name

def name() -> ::String
Returns
  • (::String) — Output only. Resource name of the BatchPredictionJob.

#output_config

def output_config() -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig)

#output_config=

def output_config=(value) -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig
Parameter
  • value (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig)
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig)

#output_info

def output_info() -> ::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputInfo
Returns
  • (::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputInfo)

#partial_failures

def partial_failures() -> ::Array<::Google::Rpc::Status>
Returns
  • (::Array<::Google::Rpc::Status>) — Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. The Status details field contains standard Google Cloud error details.
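
A small sketch of surfacing partial failures, assuming job is a fetched BatchPredictionJob message:

    job.partial_failures.each do |status|
      warn "partial failure #{status.code}: #{status.message}"
    end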

#resources_consumed

def resources_consumed() -> ::Google::Cloud::AIPlatform::V1::ResourcesConsumed
Returns
  • (::Google::Cloud::AIPlatform::V1::ResourcesConsumed) — Output only. Information about resources that had been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes.

    Note: This field currently may not be populated for batch predictions that use AutoML Models.

#satisfies_pzi

def satisfies_pzi() -> ::Boolean
Returns
  • (::Boolean) — Output only. Reserved for future use.

#satisfies_pzs

def satisfies_pzs() -> ::Boolean
Returns
  • (::Boolean) — Output only. Reserved for future use.

#service_account

def service_account() -> ::String
Returns
  • (::String) — The service account that the DeployedModel's container runs as. If not specified, a system-generated one will be used; it has minimal permissions, and the custom container, if used, may not have enough permissions to access other Google Cloud resources.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

#service_account=

def service_account=(value) -> ::String
Parameter
  • value (::String) — The service account that the DeployedModel's container runs as. If not specified, a system-generated one will be used; it has minimal permissions, and the custom container, if used, may not have enough permissions to access other Google Cloud resources.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

Returns
  • (::String) — The service account that the DeployedModel's container runs as. If not specified, a system-generated one will be used; it has minimal permissions, and the custom container, if used, may not have enough permissions to access other Google Cloud resources.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

#start_time

def start_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)

#state

def state() -> ::Google::Cloud::AIPlatform::V1::JobState
Returns
  • (::Google::Cloud::AIPlatform::V1::JobState)
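
A minimal polling sketch, assuming the client and the created job from the class-level example; the sleep interval is arbitrary. Enum values are returned as symbols.

    job = created  # the BatchPredictionJob returned by create_batch_prediction_job
    terminal_states = [:JOB_STATE_SUCCEEDED, :JOB_STATE_FAILED, :JOB_STATE_CANCELLED]
    until terminal_states.include?(job.state)
      sleep 30
      job = client.get_batch_prediction_job(name: job.name)
    end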

#unmanaged_container_model

def unmanaged_container_model() -> ::Google::Cloud::AIPlatform::V1::UnmanagedContainerModel
Returns
  • (::Google::Cloud::AIPlatform::V1::UnmanagedContainerModel)

#unmanaged_container_model=

def unmanaged_container_model=(value) -> ::Google::Cloud::AIPlatform::V1::UnmanagedContainerModel
Parameter
  • value (::Google::Cloud::AIPlatform::V1::UnmanagedContainerModel)
Returns
  • (::Google::Cloud::AIPlatform::V1::UnmanagedContainerModel)

#update_time

def update_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)