Class PipelineJob (1.15.0)

PipelineJob(
    display_name: str,
    template_path: str,
    job_id: Optional[str] = None,
    pipeline_root: Optional[str] = None,
    parameter_values: Optional[Dict[str, Any]] = None,
    enable_caching: Optional[bool] = None,
    encryption_spec_key_name: Optional[str] = None,
    labels: Optional[Dict[str, str]] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    failure_policy: Optional[str] = None,
)

Instantiates a PipelineJob representation from a compiled pipeline definition; call run() or submit() on the instance to execute it.

Parameters

display_name str

Required. The user-defined name of this Pipeline.

template_path str

Required. The path of PipelineJob or PipelineSpec JSON or YAML file. It can be a local path, a Google Cloud Storage URI (e.g. "gs://project.name"), or an Artifact Registry URI (e.g. "https://us-central1-kfp.pkg.dev/proj/repo/pack/latest").

job_id str

Optional. The unique ID of the job run. If not specified, pipeline name + timestamp will be used.

pipeline_root str

Optional. The root of the pipeline outputs. Defaults to the staging bucket.

parameter_values Dict[str, Any]

Optional. A mapping from runtime parameter names to the values that control the pipeline run.

enable_caching bool

Optional. Whether to turn on caching for the run. If not set, the compile-time settings apply; these default to True for all tasks, but users may specify different caching options for individual tasks. If set, the value applies to all tasks in the pipeline and overrides the compile-time settings.

encryption_spec_key_name str

Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect the job. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. If this is set, then all resources created by the PipelineJob will be encrypted with the provided encryption key. Overrides encryption_spec_key_name set in aiplatform.init.

labels Dict[str, str]

Optional. The user-defined metadata used to organize the PipelineJob.

credentials auth_credentials.Credentials

Optional. Custom credentials to use to create this PipelineJob. Overrides credentials set in aiplatform.init.

project str

Optional. The project that you want to run this PipelineJob in. If not set, the project set in aiplatform.init will be used.

location str

Optional. Location to create PipelineJob. If not set, location set in aiplatform.init will be used.

failure_policy str

Optional. The failure policy, either "slow" or "fast". By default, a pipeline continues to run until no more tasks can be executed, known as PIPELINE_FAILURE_POLICY_FAIL_SLOW ("slow"). If set to PIPELINE_FAILURE_POLICY_FAIL_FAST ("fast"), the pipeline stops scheduling new tasks as soon as any task fails; tasks that are already scheduled still run to completion.
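As a usage sketch of the constructor above (every project, bucket, and template name below is a placeholder, not a value from this reference):

```python
# Illustrative job configuration; all values are placeholders.
job_config = dict(
    display_name="my-pipeline",
    template_path="pipeline_spec.json",  # local path, gs:// URI, or Artifact Registry URI
    pipeline_root="gs://my-bucket/pipeline-root",
    parameter_values={"learning_rate": 0.01, "epochs": 10},
    enable_caching=False,   # overrides compile-time caching for every task
    failure_policy="fast",  # stop scheduling new tasks once one fails
)

def create_job():
    # Constructing the PipelineJob does not execute it; call run() or
    # submit() on the returned object. The import is local so the config
    # above can be inspected without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    return aiplatform.PipelineJob(**job_config)
```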

Inheritance

builtins.object > abc.ABC > google.cloud.aiplatform.base.VertexAiResourceNoun > google.cloud.aiplatform.base.FutureManager > google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager > google.cloud.aiplatform.base.DoneMixin > google.cloud.aiplatform.base.StatefulResource > google.cloud.aiplatform.base.VertexAiStatefulResource > google.cloud.aiplatform.metadata.experiment_resources._ExperimentLoggable > PipelineJob

Properties

has_failed

Returns True if the pipeline has failed, False otherwise.

state

Current pipeline state.
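A minimal polling loop over these properties might look like the following sketch (`job` is assumed to be an existing PipelineJob instance):

```python
import time

def watch(job, poll_seconds=60):
    # Poll until the job reaches a terminal state, then surface failures.
    # `job` is any object exposing the PipelineJob interface shown above.
    while not job.done():
        print("pipeline state:", job.state)
        time.sleep(poll_seconds)
    if job.has_failed:
        raise RuntimeError("pipeline run failed")
    return job.state
```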

Methods

cancel

cancel()

Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the job, but success is not guaranteed. On successful cancellation, the PipelineJob is not deleted; instead it becomes a job with state set to CANCELLED.
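A hedged sketch of guarding cancellation behind a completion check (the helper name is illustrative, not part of the SDK):

```python
def cancel_if_running(job):
    # Request best-effort cancellation for a job that hasn't finished.
    # The job is not deleted; on success its state becomes CANCELLED.
    if job.done():
        return False
    job.cancel()
    return True
```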

clone

clone(
    display_name: Optional[str] = None,
    job_id: Optional[str] = None,
    pipeline_root: Optional[str] = None,
    parameter_values: Optional[Dict[str, Any]] = None,
    enable_caching: Optional[bool] = None,
    encryption_spec_key_name: Optional[str] = None,
    labels: Optional[Dict[str, str]] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
)

Returns a new PipelineJob object with the same settings as the original one.

Parameters
display_name str

Optional. The user-defined name of this cloned Pipeline. If not specified, the original pipeline's display name is used.

job_id str

Optional. The unique ID of the job run. If not specified, "cloned" + pipeline name + timestamp will be used.

pipeline_root str

Optional. The root of the pipeline outputs. Defaults to the same staging bucket as the original pipeline.

parameter_values Dict[str, Any]

Optional. A mapping from runtime parameter names to the values that control the pipeline run. Defaults to the same values as the original PipelineJob.

enable_caching bool

Optional. Whether to turn on caching for the run. If not set, defaults to the same setting as the original pipeline. If set, the value applies to all tasks in the pipeline.

encryption_spec_key_name str

Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect the job. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. If this is set, then all resources created by the PipelineJob will be encrypted with the provided encryption key. If not specified, encryption_spec of original PipelineJob will be used.

labels Dict[str, str]

Optional. The user-defined metadata used to organize the PipelineJob.

credentials auth_credentials.Credentials

Optional. Custom credentials to use to create this PipelineJob. Overrides credentials set in aiplatform.init.

project str

Optional. The project that you want to run this PipelineJob in. If not set, the project set in original PipelineJob will be used.

location str

Optional. Location to create PipelineJob. If not set, location set in original PipelineJob will be used.

Exceptions
ValueError

If job_id or labels have an incorrect format.
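A sketch of a common clone pattern, rerunning a finished job with new runtime parameters (the display name and helper name are placeholders):

```python
def retry_with_params(job, new_params):
    # Clone an existing job, overriding only the runtime parameters;
    # everything else (template, encryption key, labels, ...) is inherited.
    cloned = job.clone(
        display_name="my-pipeline-retry",  # placeholder name
        parameter_values=new_params,
    )
    cloned.run(sync=False)  # launch without blocking
    return cloned
```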

done

done()

Helper method that returns True if the PipelineJob is done, False otherwise.

get

get(
    resource_name: str,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
)

Get a Vertex AI Pipeline Job for the given resource_name.

Parameters
resource_name str

Required. A fully-qualified resource name or ID.

project str

Optional. Project to retrieve this PipelineJob from. If not set, project set in aiplatform.init will be used.

location str

Optional. Location to retrieve this PipelineJob from. If not set, location set in aiplatform.init will be used.

credentials auth_credentials.Credentials

Optional. Custom credentials to use to retrieve this PipelineJob. Overrides credentials set in aiplatform.init.
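As a sketch, resource_name may be either a bare job ID or a fully-qualified name; the helper below (illustrative, not part of the SDK) builds the latter:

```python
def pipeline_job_name(project, location, job_id):
    # Illustrative helper that builds the fully-qualified resource name
    # accepted by PipelineJob.get().
    return f"projects/{project}/locations/{location}/pipelineJobs/{job_id}"

def fetch(resource_name):
    # get() also accepts the bare job ID, taking project and location
    # from aiplatform.init(). Import is local so the helper above is
    # usable without the SDK installed.
    from google.cloud import aiplatform
    return aiplatform.PipelineJob.get(resource_name=resource_name)
```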

list

list(
    filter: Optional[str] = None,
    order_by: Optional[str] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
)

List all instances of this PipelineJob resource.

Example Usage:

aiplatform.PipelineJob.list(
    filter='display_name="experiment_a27"',
    order_by='create_time desc',
)

Parameters
filter str

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

order_by str

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: display_name, create_time, update_time

project str

Optional. Project to retrieve list from. If not set, project set in aiplatform.init will be used.

location str

Optional. Location to retrieve list from. If not set, location set in aiplatform.init will be used.

credentials auth_credentials.Credentials

Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init.

run

run(
    service_account: Optional[str] = None,
    network: Optional[str] = None,
    sync: Optional[bool] = True,
    create_request_timeout: Optional[float] = None,
)

Run this configured PipelineJob and monitor the job until completion.

Parameters
service_account str

Optional. The service account to use as the workload's run-as account. Users submitting jobs must have act-as permission on this run-as account.

network str

Optional. The full name of the Compute Engine network to which the job should be peered. For example, projects/12345/global/networks/myVPC. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

sync bool

Optional. Whether to execute this method synchronously. If False, this method returns immediately and the job runs in a concurrent Future.

create_request_timeout float

Optional. The timeout for the create request in seconds.
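A non-blocking launch might be sketched as follows (the service account and network values are placeholders):

```python
def launch(job):
    # Launch the configured job without blocking the caller; the
    # service account and network below are placeholders.
    job.run(
        service_account="runner@my-project.iam.gserviceaccount.com",
        network="projects/12345/global/networks/myVPC",
        sync=False,  # return immediately; the job runs in a concurrent Future
        create_request_timeout=60.0,
    )
```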

submit

submit(
    service_account: Optional[str] = None,
    network: Optional[str] = None,
    create_request_timeout: Optional[float] = None,
    *,
    experiment: Optional[
        Union[str, google.cloud.aiplatform.metadata.experiment_resources.Experiment]
    ] = None
)

Run this configured PipelineJob.

Parameters
service_account str

Optional. The service account to use as the workload's run-as account. Users submitting jobs must have act-as permission on this run-as account.

network str

Optional. The full name of the Compute Engine network to which the job should be peered. For example, projects/12345/global/networks/myVPC. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

create_request_timeout float

Optional. The timeout for the create request in seconds.

experiment Union[str, experiments_resource.Experiment]

Optional. The Vertex AI experiment name or instance to associate to this PipelineJob. Metrics produced by the PipelineJob as system.Metric Artifacts will be associated as metrics to the current Experiment Run. Pipeline parameters will be associated as parameters to the current Experiment Run.
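A sketch of submitting a job under an experiment (the experiment name and helper name are placeholders):

```python
def submit_to_experiment(job, experiment="my-experiment"):
    # Submit without monitoring; pipeline metrics and parameters will be
    # associated with the experiment run. The experiment name is a placeholder.
    job.submit(experiment=experiment)
    job.wait_for_resource_creation()  # block only until the resource exists
    return job
```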

wait

wait()

Wait for this PipelineJob to complete.

wait_for_resource_creation

wait_for_resource_creation()

Waits until resource has been created.