TrainingPipeline(mapping=None, *, ignore_unknown_fields=False, **kwargs)
The TrainingPipeline orchestrates tasks associated with training a
Model. It always executes the training task, and optionally may also
export data from AI Platform's Dataset which becomes the training
data, upload the Model to AI Platform, and evaluate the Model.
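To make the resource shape concrete, here is a minimal sketch of a TrainingPipeline payload built as a plain dict, together with a small required-field check. This is illustrative only, not the official client API: the field names follow the descriptions below, but all values (display names, the schema filename, the inputs) are hypothetical.

```python
# Illustrative sketch (not the official client call): the shape of a
# TrainingPipeline resource as a plain dict. All values are hypothetical.
training_pipeline = {
    # Required. The user-defined name of the pipeline.
    "display_name": "my-training-pipeline",
    # Required. YAML schema that selects the training task; definition
    # files live under gs://google-cloud-aiplatform/schema/trainingjob/
    # definition/ (the filename here is an assumed example).
    "training_task_definition": (
        "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
        "custom_task_1.0.0.yaml"
    ),
    # Required. Parameters matching the ``inputs`` of the definition above.
    "training_task_inputs": {"workerPoolSpecs": []},
    # Optional. Only meaningful if the training_task_definition says so.
    "model_to_upload": {"display_name": "my-model"},
}

def missing_required_fields(pipeline: dict) -> list:
    """Return the names of required TrainingPipeline fields that are absent."""
    required = ("display_name", "training_task_definition",
                "training_task_inputs")
    return [name for name in required if name not in pipeline]

missing = missing_required_fields(training_pipeline)
```

In practice the same structure would be passed to the AI Platform client when creating the pipeline; the dict form simply mirrors the message fields documented below.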
Output only. Resource name of the TrainingPipeline.
Required. The user-defined name of this TrainingPipeline.
Specifies AI Platform owned input data that may be used for training the Model. The TrainingPipeline's ``training_task_definition`` should make clear whether this config is used and if there are any special requirements on how it should be filled. If nothing about this config is mentioned in the ``training_task_definition``, then it should be assumed that the TrainingPipeline does not depend on this configuration.
Required. A Google Cloud Storage path to the YAML file that defines the training task which is responsible for producing the model artifact, and may also include additional auxiliary work. The definition files that can be used here are found in gs://google-cloud-aiplatform/schema/trainingjob/definition/. Note: the URI given on output will be immutable and will probably differ from the one given on input, possibly including the URI scheme. The output URI will point to a location where the user has only read access.
Required. The training task's parameter(s), as specified in the ``training_task_definition``'s ``inputs``.
Output only. The metadata information as specified in the ``training_task_definition``'s ``metadata``. This metadata is an auxiliary runtime and final information about the training task. While the pipeline is running this information is populated only at a best effort basis. Only present if the pipeline's ``training_task_definition`` contains ``metadata`` object.
Describes the Model that may be uploaded (via [ModelService.UploadModel]) by this TrainingPipeline. The TrainingPipeline's ``training_task_definition`` should make clear whether this Model description should be populated, and if there are any special requirements regarding how it should be filled. If nothing is mentioned in the ``training_task_definition``, then it should be assumed that this field should not be filled and the training task either uploads the Model without needing this information, or the training task does not support uploading a Model as part of the pipeline. When the Pipeline's state becomes ``PIPELINE_STATE_SUCCEEDED`` and the trained Model has been uploaded into AI Platform, then the model_to_upload's resource ``name`` is populated. The Model is always uploaded into the Project and Location in which this pipeline runs.
Output only. The detailed state of the pipeline.
Output only. Only populated when the pipeline's state is ``PIPELINE_STATE_FAILED`` or ``PIPELINE_STATE_CANCELLED``.
Output only. Time when the TrainingPipeline was created.
Output only. Time when the TrainingPipeline for the first time entered the ``PIPELINE_STATE_RUNNING`` state.
Output only. Time when the TrainingPipeline entered any of the following states: ``PIPELINE_STATE_SUCCEEDED``, ``PIPELINE_STATE_FAILED``, ``PIPELINE_STATE_CANCELLED``.
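The timestamps above imply a simple lifecycle: ``end_time`` is only set once the pipeline reaches one of the three terminal states. The sketch below uses a stand-in enum to show how client code might test for that; the real ``PipelineState`` enum ships with the AI Platform client, and the numeric values here are illustrative, not the wire values.

```python
from enum import Enum

# Stand-in for the PipelineState enum referenced above. The real enum is
# defined in the AI Platform client library; numeric values here are
# illustrative only.
class PipelineState(Enum):
    PIPELINE_STATE_QUEUED = 1
    PIPELINE_STATE_PENDING = 2
    PIPELINE_STATE_RUNNING = 3
    PIPELINE_STATE_SUCCEEDED = 4
    PIPELINE_STATE_FAILED = 5
    PIPELINE_STATE_CANCELLED = 6

# The states in which end_time is populated, per the field description.
TERMINAL_STATES = {
    PipelineState.PIPELINE_STATE_SUCCEEDED,
    PipelineState.PIPELINE_STATE_FAILED,
    PipelineState.PIPELINE_STATE_CANCELLED,
}

def is_finished(state: PipelineState) -> bool:
    """True once the pipeline has entered a terminal state."""
    return state in TERMINAL_STATES
```

A polling loop would typically call something like ``is_finished`` on each refresh and, on ``PIPELINE_STATE_FAILED`` or ``PIPELINE_STATE_CANCELLED``, inspect the ``error`` field described above.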
Output only. Time when the TrainingPipeline was most recently updated.
The labels with user-defined metadata to organize TrainingPipelines. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
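The label constraints quoted above can be checked client-side before submitting a pipeline. The following is a simplified sketch of such a check (length at most 64 Unicode codepoints; lowercase letters, digits, underscores, and dashes); it does not attempt to capture every server-side rule, such as the extra requirements on label keys.

```python
import re

# Simplified client-side check of the label constraints described above.
# \w with re.UNICODE admits international word characters; the explicit
# lowercase comparison rejects uppercase letters.
_LABEL_RE = re.compile(r"^[\w-]*$", re.UNICODE)

def is_valid_label(text: str) -> bool:
    """Roughly validate a single label key or value."""
    return (
        len(text) <= 64                      # at most 64 codepoints
        and _LABEL_RE.match(text) is not None  # letters, digits, _, -
        and text == text.lower()             # no uppercase letters
    )
```

This is a pre-flight convenience only; the service remains the authority on what it accepts.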
Inheritance: builtins.object > proto.message.Message > TrainingPipeline
LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)
The abstract base class for a message.
Keys and values corresponding to the fields of the message.
A dictionary or message to be used to determine the values for this message.
If True, do not raise errors for unknown fields. Only applied if ``mapping`` is a mapping type or there are keyword parameters.
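The constructor contract above (values from a ``mapping`` or from keyword arguments, with ``ignore_unknown_fields`` deciding whether unrecognized keys raise or are dropped) can be illustrated with a small stand-in class. This is an assumption-laden sketch, not the real ``proto.message.Message`` implementation.

```python
# Stand-in class (NOT the real proto.message.Message) mimicking the
# documented constructor: field values may come from ``mapping`` or from
# keyword arguments, and ``ignore_unknown_fields`` controls whether
# unrecognized keys raise an error or are silently dropped.
class StandInLabelsEntry:
    _fields = {"key", "value"}  # LabelsEntry has exactly these two fields

    def __init__(self, mapping=None, *, ignore_unknown_fields=False, **kwargs):
        data = dict(mapping or {})
        data.update(kwargs)  # keyword arguments override the mapping
        for name in list(data):
            if name not in self._fields:
                if ignore_unknown_fields:
                    del data[name]  # drop the unknown field silently
                else:
                    raise ValueError(f"Unknown field: {name!r}")
        for name in self._fields:
            setattr(self, name, data.get(name))

entry = StandInLabelsEntry({"key": "team"}, value="ml")
```

With ``ignore_unknown_fields=True``, an extra key such as ``bogus`` would be discarded instead of raising, matching the behavior described above.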