```python
BatchPredictionJob(
    batch_prediction_job_name: str,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
)
```
Retrieves a BatchPredictionJob resource and instantiates its representation.

| Name | Description |
|---|---|
| batch_prediction_job_name | str. Required. A fully-qualified BatchPredictionJob resource name or ID. Example: "projects/.../locations/.../batchPredictionJobs/456", or "456" when project and location are initialized or passed. |
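As a minimal sketch, an existing job can be retrieved by ID or by fully-qualified resource name; the project, location, and job ID below are placeholders:

```python
from google.cloud import aiplatform

# Initialize the SDK once per process (placeholder project/location).
aiplatform.init(project="my-project", location="us-central1")

# Retrieve an existing batch prediction job by its numeric ID...
job = aiplatform.BatchPredictionJob("456")

# ...or by its fully-qualified resource name.
job = aiplatform.BatchPredictionJob(
    "projects/my-project/locations/us-central1/batchPredictionJobs/456"
)
print(job.state)
```

Retrieval requires authenticated access to an existing job; the ID form only works when project and location were initialized or passed.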
Properties
completion_stats
Statistics on completed and failed prediction instances.
create_time
Time this resource was created.
display_name
Display name of this resource.
encryption_spec
Customer-managed encryption key options for this Vertex AI resource.
If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.
end_time
Time when the Job resource entered the JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED state.
error
Detailed error info for this Job resource. Only populated when the Job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
gca_resource
The underlying resource proto representation.
labels
User-defined labels containing metadata about this resource.
Read more about labels at https://goo.gl/xmQnxf
name
Name of this resource.
output_info
Information describing the output of this job, including output location into which prediction output is written.
This is only available for batch prediction jobs that have run successfully.
partial_failures
Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard GCP error details.
resource_name
Full qualified resource name.
start_time
Time when the Job resource entered the JOB_STATE_RUNNING state for the first time.
state
Fetch the Job again and return the current JobState.

Returns

| Type | Description |
|---|---|
| state (job_state.JobState) | Enum that describes the state of a Vertex AI job. |
update_time
Time this resource was last updated.
Methods
cancel
```python
cancel() -> None
```
Cancels this Job.

Success of cancellation is not guaranteed. Use the Job.state property to verify whether cancellation was successful.
create
```python
create(
    job_display_name: str,
    model_name: typing.Union[str, google.cloud.aiplatform.models.Model],
    instances_format: str = "jsonl",
    predictions_format: str = "jsonl",
    gcs_source: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
    bigquery_source: typing.Optional[str] = None,
    gcs_destination_prefix: typing.Optional[str] = None,
    bigquery_destination_prefix: typing.Optional[str] = None,
    model_parameters: typing.Optional[typing.Dict] = None,
    machine_type: typing.Optional[str] = None,
    accelerator_type: typing.Optional[str] = None,
    accelerator_count: typing.Optional[int] = None,
    starting_replica_count: typing.Optional[int] = None,
    max_replica_count: typing.Optional[int] = None,
    generate_explanation: typing.Optional[bool] = False,
    explanation_metadata: typing.Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: typing.Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    encryption_spec_key_name: typing.Optional[str] = None,
    sync: bool = True,
    create_request_timeout: typing.Optional[float] = None,
    batch_size: typing.Optional[int] = None,
    model_monitoring_objective_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig
    ] = None,
    model_monitoring_alert_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.alert.AlertConfig
    ] = None,
    analysis_instance_schema_uri: typing.Optional[str] = None,
    service_account: typing.Optional[str] = None,
) -> google.cloud.aiplatform.jobs.BatchPredictionJob
```
Create a batch prediction job.
Parameters

| Name | Description |
|---|---|
| job_display_name | str. Required. The user-defined name of the BatchPredictionJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| model_name | Union[str, aiplatform.Model]. Required. A fully-qualified model resource name or model ID. Example: "projects/123/locations/us-central1/models/456", or "456" when project and location are initialized or passed. May optionally contain a version ID or alias in {model_name}@{version} form. May also be an instance of aiplatform.Model. |
| instances_format | str. Required. The format in which instances are provided. Must be one of the formats listed in |
| predictions_format | str. Required. The format in which Vertex AI outputs the predictions. Must be one of the formats specified in |
| gcs_source | Optional[Sequence[str]]. Google Cloud Storage URI(-s) to your instances to run batch prediction on. They must match |
| bigquery_source | Optional[str]. BigQuery URI to a table, up to 2000 characters long. For example: |
| gcs_destination_prefix | Optional[str]. The Google Cloud Storage location of the directory where the output is to be written to. In the given directory a new directory is created. Its name is |
| bigquery_destination_prefix | Optional[str]. The BigQuery project or dataset location where the output is to be written to. If project is provided, a new dataset is created with name |
| model_parameters | Optional[Dict]. The parameters that govern the predictions. The schema of the parameters may be specified via the Model's |
| machine_type | Optional[str]. The type of machine for running batch prediction on dedicated resources. If no machine type is specified, the batch prediction job runs with automatic resources. |
| accelerator_type | Optional[str]. The type of accelerator(s) that may be attached to the machine as per |
| accelerator_count | Optional[int]. The number of accelerators to attach to the |
| starting_replica_count | Optional[int]. The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides the starting number, not greater than |
| max_replica_count | Optional[int]. The maximum number of machine replicas the batch operation may be scaled to. Only used if |
| generate_explanation | bool. Optional. Generate explanations along with the batch prediction results. This will cause the batch prediction output to include explanations based on the |
| explanation_metadata | aiplatform.explain.ExplanationMetadata. Optional. Explanation metadata configuration for this BatchPredictionJob. Can be specified only if |
| explanation_parameters | aiplatform.explain.ExplanationParameters. Optional. Parameters to configure explaining for the Model's predictions. Can be specified only if |
| labels | Dict[str, str]. Optional. The labels with user-defined metadata to organize your BatchPredictionJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
| credentials | Optional[auth_credentials.Credentials]. Custom credentials to use to create this batch prediction job. Overrides credentials set in aiplatform.init. |
| encryption_spec_key_name | Optional[str]. Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the job. Has the form: |
| sync | bool. Whether to execute this method synchronously. If False, this method runs in a concurrent Future; any downstream object is returned immediately and synced when the Future completes. |
| create_request_timeout | float. Optional. The timeout for the create request in seconds. |
| batch_size | int. Optional. The number of records (e.g. instances) handed to a machine replica in each batch of the operation. Consider the machine type and the size of a single record when setting this parameter: a higher value speeds up the operation, but a value that is too high causes the batch not to fit in the machine's memory, and the whole operation fails. The default value is 64. |
| model_monitoring_objective_config | aiplatform.model_monitoring.ObjectiveConfig. Optional. The objective config for model monitoring. Passing this parameter enables monitoring on the model associated with this batch prediction job. |
| model_monitoring_alert_config | aiplatform.model_monitoring.EmailAlertConfig. Optional. Configures how model monitoring alerts are sent to the user. Currently only email alerts are supported. |
| analysis_instance_schema_uri | str. Optional. Only applicable if model_monitoring_objective_config is also passed. This parameter specifies the URI of the YAML schema file describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set with all fields in the predict instance formatted as strings. |
| service_account | str. Optional. Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. |

Returns

| Type | Description |
|---|---|
| (jobs.BatchPredictionJob) | Instantiated representation of the created batch prediction job. |
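As a hedged sketch, a typical GCS-to-GCS JSONL job looks like the following; the project, bucket, and model ID are placeholders, and the call requires authenticated access to an existing model:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Run batch prediction over JSONL instances in GCS, writing JSONL
# predictions back to GCS. All resource names are placeholders.
batch_job = aiplatform.BatchPredictionJob.create(
    job_display_name="my-batch-job",
    model_name="projects/my-project/locations/us-central1/models/456",
    instances_format="jsonl",
    predictions_format="jsonl",
    gcs_source="gs://my-bucket/instances.jsonl",
    gcs_destination_prefix="gs://my-bucket/predictions/",
    machine_type="n1-standard-4",  # omit to run with automatic resources
    sync=True,  # block until the job finishes
)
print(batch_job.state)
```

With `sync=False`, the method returns immediately and the returned object syncs once the underlying Future completes.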
delete
```python
delete(sync: bool = True) -> None
```
Deletes this Vertex AI resource. WARNING: This deletion is permanent.
done
```python
done() -> bool
```
Method indicating whether a job has completed.
iter_outputs
```python
iter_outputs(
    bq_max_results: typing.Optional[int] = 100,
) -> typing.Union[
    typing.Iterable[storage.Blob], typing.Iterable[bigquery.table.RowIterator]
]
```
Returns an Iterable object to traverse the output files: either a list of GCS Blobs or a BigQuery RowIterator, depending on the output config set when the BatchPredictionJob was created.

Parameter

| Name | Description |
|---|---|
| bq_max_results | Optional[int]. Limit on rows to retrieve from the prediction table in the BigQuery dataset. Only used when retrieving predictions from a bigquery_destination_prefix. Default is 100. |

Exceptions

| Type | Description |
|---|---|
| RuntimeError | If the BatchPredictionJob is in a JobState other than SUCCEEDED, since outputs cannot be retrieved until the Job has finished. |
| NotImplementedError | If the BatchPredictionJob succeeded but output_info has neither a GCS nor a BQ output. |

Returns

| Type | Description |
|---|---|
| Union[Iterable[storage.Blob], Iterable[bigquery.table.RowIterator]] | Either a list of GCS Blob objects within the prediction output directory or an iterable BigQuery RowIterator with predictions. |
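As a sketch, iterating over the outputs of a completed job with a GCS destination (assuming `batch_job` is an existing BatchPredictionJob; the exact blob names depend on the job's output config):

```python
# Block until the job reaches a terminal state; iter_outputs raises
# RuntimeError unless the job has SUCCEEDED.
batch_job.wait_for_completion()

for output in batch_job.iter_outputs():
    # With a gcs_destination_prefix, each item is a
    # google.cloud.storage.Blob in the prediction output directory.
    print(output.name)
```

For a BigQuery destination, the iterable instead yields rows from the prediction table, capped by `bq_max_results`.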
list
```python
list(
    filter: typing.Optional[str] = None,
    order_by: typing.Optional[str] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
) -> typing.List[google.cloud.aiplatform.base.VertexAiResourceNoun]
```
List all instances of this Job Resource.

Example Usage:

```python
aiplatform.BatchPredictionJob.list(
    filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
)
```

Parameters

| Name | Description |
|---|---|
| filter | str. Optional. An expression for filtering the results of the request. For field names, both snake_case and camelCase are supported. |
| order_by | str. Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: |
| project | str. Optional. Project to retrieve the list from. If not set, the project set in aiplatform.init will be used. |
| location | str. Optional. Location to retrieve the list from. If not set, the location set in aiplatform.init will be used. |
| credentials | auth_credentials.Credentials. Optional. Custom credentials to use to retrieve the list. Overrides credentials set in aiplatform.init. |
submit
```python
submit(
    *,
    job_display_name: typing.Optional[str] = None,
    model_name: typing.Union[str, google.cloud.aiplatform.models.Model],
    instances_format: str = "jsonl",
    predictions_format: str = "jsonl",
    gcs_source: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
    bigquery_source: typing.Optional[str] = None,
    gcs_destination_prefix: typing.Optional[str] = None,
    bigquery_destination_prefix: typing.Optional[str] = None,
    model_parameters: typing.Optional[typing.Dict] = None,
    machine_type: typing.Optional[str] = None,
    accelerator_type: typing.Optional[str] = None,
    accelerator_count: typing.Optional[int] = None,
    starting_replica_count: typing.Optional[int] = None,
    max_replica_count: typing.Optional[int] = None,
    generate_explanation: typing.Optional[bool] = False,
    explanation_metadata: typing.Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: typing.Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    encryption_spec_key_name: typing.Optional[str] = None,
    create_request_timeout: typing.Optional[float] = None,
    batch_size: typing.Optional[int] = None,
    model_monitoring_objective_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig
    ] = None,
    model_monitoring_alert_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.alert.AlertConfig
    ] = None,
    analysis_instance_schema_uri: typing.Optional[str] = None,
    service_account: typing.Optional[str] = None
) -> google.cloud.aiplatform.jobs.BatchPredictionJob
```
Submit a batch prediction job without waiting for completion.
Parameters

| Name | Description |
|---|---|
| job_display_name | str. Required. The user-defined name of the BatchPredictionJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| model_name | Union[str, aiplatform.Model]. Required. A fully-qualified model resource name or model ID. Example: "projects/123/locations/us-central1/models/456", or "456" when project and location are initialized or passed. May optionally contain a version ID or alias in {model_name}@{version} form. May also be an instance of aiplatform.Model. |
| instances_format | str. Required. The format in which instances are provided. Must be one of the formats listed in |
| predictions_format | str. Required. The format in which Vertex AI outputs the predictions. Must be one of the formats specified in |
| gcs_source | Optional[Sequence[str]]. Google Cloud Storage URI(-s) to your instances to run batch prediction on. They must match |
| bigquery_source | Optional[str]. BigQuery URI to a table, up to 2000 characters long. For example: |
| gcs_destination_prefix | Optional[str]. The Google Cloud Storage location of the directory where the output is to be written to. In the given directory a new directory is created. Its name is |
| bigquery_destination_prefix | Optional[str]. The BigQuery project or dataset location where the output is to be written to. If project is provided, a new dataset is created with name |
| model_parameters | Optional[Dict]. The parameters that govern the predictions. The schema of the parameters may be specified via the Model's |
| machine_type | Optional[str]. The type of machine for running batch prediction on dedicated resources. If no machine type is specified, the batch prediction job runs with automatic resources. |
| accelerator_type | Optional[str]. The type of accelerator(s) that may be attached to the machine as per |
| accelerator_count | Optional[int]. The number of accelerators to attach to the |
| starting_replica_count | Optional[int]. The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides the starting number, not greater than |
| max_replica_count | Optional[int]. The maximum number of machine replicas the batch operation may be scaled to. Only used if |
| generate_explanation | bool. Optional. Generate explanations along with the batch prediction results. This will cause the batch prediction output to include explanations based on the |
| explanation_metadata | aiplatform.explain.ExplanationMetadata. Optional. Explanation metadata configuration for this BatchPredictionJob. Can be specified only if |
| explanation_parameters | aiplatform.explain.ExplanationParameters. Optional. Parameters to configure explaining for the Model's predictions. Can be specified only if |
| labels | Dict[str, str]. Optional. The labels with user-defined metadata to organize your BatchPredictionJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
| credentials | Optional[auth_credentials.Credentials]. Custom credentials to use to create this batch prediction job. Overrides credentials set in aiplatform.init. |
| encryption_spec_key_name | Optional[str]. Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the job. Has the form: |
| create_request_timeout | float. Optional. The timeout for the create request in seconds. |
| batch_size | int. Optional. The number of records (e.g. instances) handed to a machine replica in each batch of the operation. Consider the machine type and the size of a single record when setting this parameter: a higher value speeds up the operation, but a value that is too high causes the batch not to fit in the machine's memory, and the whole operation fails. The default value is 64. |
| model_monitoring_objective_config | aiplatform.model_monitoring.ObjectiveConfig. Optional. The objective config for model monitoring. Passing this parameter enables monitoring on the model associated with this batch prediction job. |
| model_monitoring_alert_config | aiplatform.model_monitoring.EmailAlertConfig. Optional. Configures how model monitoring alerts are sent to the user. Currently only email alerts are supported. |
| analysis_instance_schema_uri | str. Optional. Only applicable if model_monitoring_objective_config is also passed. This parameter specifies the URI of the YAML schema file describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set with all fields in the predict instance formatted as strings. |
| service_account | str. Optional. Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. |

Returns

| Type | Description |
|---|---|
| (jobs.BatchPredictionJob) | Instantiated representation of the created batch prediction job. |
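As a sketch of the asynchronous flow, submit() returns as soon as the job is created; pairing it with wait_for_completion() reaches the same end state as a synchronous create(). The resource names below are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# submit() creates the job and returns immediately, without blocking.
batch_job = aiplatform.BatchPredictionJob.submit(
    job_display_name="my-batch-job",
    model_name="456",  # model ID; project/location come from init
    gcs_source="gs://my-bucket/instances.jsonl",
    gcs_destination_prefix="gs://my-bucket/predictions/",
)

# Do other work here while the job runs, then block until it is done.
batch_job.wait_for_completion()  # raises RuntimeError on failure/cancel
print(batch_job.output_info)
```

This pattern is useful when the caller wants to fan out several jobs before waiting on any of them.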
to_dict
```python
to_dict() -> typing.Dict[str, typing.Any]
```
Returns the resource proto as a dictionary.
wait
```python
wait()
```
Helper method that blocks until all futures are complete.
wait_for_completion
```python
wait_for_completion() -> None
```
Waits for the job to complete.

Exceptions

| Type | Description |
|---|---|
| RuntimeError | If the job failed or was cancelled. |
wait_for_resource_creation
```python
wait_for_resource_creation() -> None
```
Waits until resource has been created.