PipelineJob(
display_name: str,
template_path: str,
job_id: Optional[str] = None,
pipeline_root: Optional[str] = None,
parameter_values: Optional[Dict[str, Any]] = None,
enable_caching: Optional[bool] = None,
encryption_spec_key_name: Optional[str] = None,
labels: Optional[Dict[str, str]] = None,
credentials: Optional[google.auth.credentials.Credentials] = None,
project: Optional[str] = None,
location: Optional[str] = None,
)
Retrieves a PipelineJob resource and instantiates its representation.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| `display_name` | `str` | Required. The user-defined name of this Pipeline. |
| `template_path` | `str` | Required. The path of the PipelineJob or PipelineSpec JSON or YAML file. It can be a local path or a Google Cloud Storage URI. Example: "gs://project.name" |
| `job_id` | `str` | Optional. The unique ID of the job run. If not specified, the pipeline name plus a timestamp is used. |
| `pipeline_root` | `str` | Optional. The root of the pipeline outputs. Defaults to the staging bucket. |
| `parameter_values` | `Dict[str, Any]` | Optional. The mapping from runtime parameter names to the values that control the pipeline run. |
| `enable_caching` | `bool` | Optional. Whether to turn on caching for the run. If not set, defaults to the compile-time settings, which are True for all tasks by default, though users may specify different caching options for individual tasks. If set, the setting applies to all tasks in the pipeline and overrides the compile-time settings. |
| `encryption_spec_key_name` | `str` | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the job. Has the form: `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{key}` |
| `labels` | `Dict[str, str]` | Optional. User-defined metadata to organize the PipelineJob. |
| `credentials` | `auth_credentials.Credentials` | Optional. Custom credentials to use to create this PipelineJob. Overrides credentials set in `aiplatform.init`. |
| `project` | `str` | Optional. The project to run this PipelineJob in. If not set, the project set in `aiplatform.init` is used. |
| `location` | `str` | Optional. The location to create the PipelineJob in. If not set, the location set in `aiplatform.init` is used. |
Inheritance
builtins.object > google.cloud.aiplatform.base.VertexAiResourceNoun > google.cloud.aiplatform.base.FutureManager > google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager > abc.ABC > google.cloud.aiplatform.base.DoneMixin > google.cloud.aiplatform.base.StatefulResource > google.cloud.aiplatform.base.VertexAiStatefulResource > PipelineJob

Properties
has_failed
Returns True if the pipeline has failed, False otherwise.
state
Current pipeline state.
Methods
cancel
cancel()
Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the job, but success is not guaranteed. On successful cancellation, the PipelineJob is not deleted; instead it becomes a job with its state set to CANCELLED.
clone
clone(
display_name: Optional[str] = None,
job_id: Optional[str] = None,
pipeline_root: Optional[str] = None,
parameter_values: Optional[Dict[str, Any]] = None,
enable_caching: Optional[bool] = None,
encryption_spec_key_name: Optional[str] = None,
labels: Optional[Dict[str, str]] = None,
credentials: Optional[google.auth.credentials.Credentials] = None,
project: Optional[str] = None,
location: Optional[str] = None,
)
Returns a new PipelineJob object with the same settings as the original one.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| `display_name` | `str` | Optional. The user-defined name of this cloned Pipeline. If not specified, the original pipeline's display name is used. |
| `job_id` | `str` | Optional. The unique ID of the job run. If not specified, "cloned" plus the pipeline name and a timestamp is used. |
| `pipeline_root` | `str` | Optional. The root of the pipeline outputs. Defaults to the same staging bucket as the original pipeline. |
| `parameter_values` | `Dict[str, Any]` | Optional. The mapping from runtime parameter names to the values that control the pipeline run. Defaults to the same values as the original PipelineJob. |
| `enable_caching` | `bool` | Optional. Whether to turn on caching for the run. If not set, defaults to the same setting as the original pipeline. If set, the setting applies to all tasks in the pipeline. |
| `encryption_spec_key_name` | `str` | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the job. Has the form: `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{key}` |
| `labels` | `Dict[str, str]` | Optional. User-defined metadata to organize the PipelineJob. |
| `credentials` | `auth_credentials.Credentials` | Optional. Custom credentials to use to create this PipelineJob. Overrides credentials set in `aiplatform.init`. |
| `project` | `str` | Optional. The project to run this PipelineJob in. If not set, the project set in the original PipelineJob is used. |
| `location` | `str` | Optional. The location to create the PipelineJob in. If not set, the location set in the original PipelineJob is used. |

Exceptions

| Type | Description |
| --- | --- |
| `ValueError` | If `job_id` or `labels` have an incorrect format. |
get
get(
resource_name: str,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None,
)
Get a Vertex AI Pipeline Job for the given resource_name.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| `resource_name` | `str` | Required. A fully-qualified resource name or ID. |
| `project` | `str` | Optional. The project to retrieve the PipelineJob from. If not set, the project set in `aiplatform.init` is used. |
| `location` | `str` | Optional. The location to retrieve the PipelineJob from. If not set, the location set in `aiplatform.init` is used. |
| `credentials` | `auth_credentials.Credentials` | Optional. Custom credentials to use to retrieve this PipelineJob. Overrides credentials set in `aiplatform.init`. |
list
list(
filter: Optional[str] = None,
order_by: Optional[str] = None,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None,
)
List all instances of this PipelineJob resource.
Example usage:

    aiplatform.PipelineJob.list(
        filter='display_name="experiment_a27"',
        order_by='create_time desc',
    )
Parameters

| Name | Type | Description |
| --- | --- | --- |
| `filter` | `str` | Optional. An expression for filtering the results of the request. For field names, both snake_case and camelCase are supported. |
| `order_by` | `str` | Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending order. Supported fields: `display_name`, `create_time`, `update_time`. |
| `project` | `str` | Optional. The project to retrieve the list from. If not set, the project set in `aiplatform.init` is used. |
| `location` | `str` | Optional. The location to retrieve the list from. If not set, the location set in `aiplatform.init` is used. |
| `credentials` | `auth_credentials.Credentials` | Optional. Custom credentials to use to retrieve the list. Overrides credentials set in `aiplatform.init`. |
run
run(
service_account: Optional[str] = None,
network: Optional[str] = None,
sync: Optional[bool] = True,
create_request_timeout: Optional[float] = None,
)
Run this configured PipelineJob and monitor the job until completion.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| `service_account` | `str` | Optional. Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. |
| `network` | `str` | Optional. The full name of the Compute Engine network to which the job should be peered, for example `projects/12345/global/networks/myVPC`. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network. |
| `sync` | `bool` | Optional. Whether to execute this method synchronously. If False, this method unblocks and the job runs in a concurrent Future. |
| `create_request_timeout` | `float` | Optional. The timeout for the create request, in seconds. |
submit
submit(
service_account: Optional[str] = None,
network: Optional[str] = None,
create_request_timeout: Optional[float] = None,
)
Run this configured PipelineJob.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| `service_account` | `str` | Optional. Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. |
| `network` | `str` | Optional. The full name of the Compute Engine network to which the job should be peered, for example `projects/12345/global/networks/myVPC`. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network. |
| `create_request_timeout` | `float` | Optional. The timeout for the create request, in seconds. |
wait
wait()
Wait for this PipelineJob to complete.
wait_for_resource_creation
wait_for_resource_creation()
Waits until the resource has been created.