Class InputDataConfig (1.14.0)

InputDataConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Specifies Vertex AI-owned input data to be used for training, and possibly evaluating, the Model.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
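Because the five split fields below all belong to the ``split`` oneof, assigning one member discards whatever member was set before. A minimal sketch of that behavior, assuming the google-cloud-aiplatform package is installed (the dataset ID and filter strings are placeholders):

    from google.cloud.aiplatform_v1 import types

    config = types.InputDataConfig(
        dataset_id="1234567890",  # placeholder Dataset ID
        fraction_split=types.FractionSplit(
            training_fraction=0.8,
            validation_fraction=0.1,
            test_fraction=0.1,
        ),
    )

    # Assigning another member of the ``split`` oneof clears fraction_split.
    config.filter_split = types.FilterSplit(
        training_filter="labels.ml_use=training",      # placeholder filters
        validation_filter="labels.ml_use=validation",
        test_filter="labels.ml_use=test",
    )

    # The underlying protobuf reports which oneof member is currently set.
    print(types.InputDataConfig.pb(config).WhichOneof("split"))  # "filter_split"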

Attributes

fraction_split (google.cloud.aiplatform_v1.types.FractionSplit)
    Split based on fractions defining the size of each set. This field is a member of `oneof`_ ``split``.

filter_split (google.cloud.aiplatform_v1.types.FilterSplit)
    Split based on the provided filters for each set. This field is a member of `oneof`_ ``split``.

predefined_split (google.cloud.aiplatform_v1.types.PredefinedSplit)
    Supported only for tabular Datasets. Split based on a predefined key. This field is a member of `oneof`_ ``split``.

timestamp_split (google.cloud.aiplatform_v1.types.TimestampSplit)
    Supported only for tabular Datasets. Split based on the timestamp of the input data pieces. This field is a member of `oneof`_ ``split``.

stratified_split (google.cloud.aiplatform_v1.types.StratifiedSplit)
    Supported only for tabular Datasets. Split based on the distribution of the specified column. This field is a member of `oneof`_ ``split``.

gcs_destination (google.cloud.aiplatform_v1.types.GcsDestination)
    The Cloud Storage location where the training data is to be written. In the given directory a new directory is created, with a name beginning ``dataset-``.

bigquery_destination (google.cloud.aiplatform_v1.types.BigQueryDestination)
    Applicable only to custom training with a tabular Dataset that has a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created, with a name beginning ``dataset_``.

dataset_id (str)
    Required. The ID of the Dataset, in the same Project and Location, whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported for training, to pick and choose from.

annotations_filter (str)
    Applicable only to Datasets that have DataItems and Annotations. A filter on the Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used, in the training, validation, or test role respectively, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, not just within a single DataItem.

annotation_schema_uri (str)
    Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 `Schema Object`.
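Putting several of these fields together: a sketch of an InputDataConfig for custom training on a tabular Dataset, splitting on a predefined column and writing the training data to Cloud Storage (the dataset ID, column name, and bucket are hypothetical):

    from google.cloud.aiplatform_v1 import types

    input_config = types.InputDataConfig(
        dataset_id="1234567890",          # hypothetical Dataset ID
        predefined_split=types.PredefinedSplit(
            key="ml_use",                 # hypothetical column holding training/validation/test
        ),
        gcs_destination=types.GcsDestination(
            output_uri_prefix="gs://example-bucket/training-output/",  # hypothetical bucket
        ),
    )

A config like this is typically passed as the ``input_data_config`` of a TrainingPipeline.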

Inheritance

builtins.object > proto.message.Message > InputDataConfig
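Because InputDataConfig inherits from proto.message.Message, the usual proto-plus classmethod helpers apply to instances. A short sketch (the dataset ID is a placeholder):

    from google.cloud.aiplatform_v1 import types

    cfg = types.InputDataConfig(dataset_id="1234567890")

    # Round-trip through the binary wire format.
    raw = types.InputDataConfig.serialize(cfg)
    restored = types.InputDataConfig.deserialize(raw)
    assert restored.dataset_id == "1234567890"

    # Convert to a plain dict, e.g. for logging or JSON encoding.
    print(types.InputDataConfig.to_dict(cfg))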