Class InputDataConfig.Builder (3.10.0)

public static final class InputDataConfig.Builder extends GeneratedMessageV3.Builder<InputDataConfig.Builder> implements InputDataConfigOrBuilder

Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.

Protobuf type google.cloud.aiplatform.v1.InputDataConfig
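For orientation, here is a minimal sketch of typical builder usage. The dataset ID and split fractions are placeholders, and FractionSplit's setters are assumed from its own message definition rather than from this page:

  InputDataConfig config =
      InputDataConfig.newBuilder()
          .setDatasetId("1234567890")   // placeholder Dataset ID
          .setFractionSplit(
              FractionSplit.newBuilder()
                  .setTrainingFraction(0.8)
                  .setValidationFraction(0.1)
                  .setTestFraction(0.1))
          .build();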

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
Descriptor

Methods

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public InputDataConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
field (FieldDescriptor)
value (Object)
Returns
InputDataConfig.Builder
Overrides

build()

public InputDataConfig build()
Returns
InputDataConfig

buildPartial()

public InputDataConfig buildPartial()
Returns
InputDataConfig
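As with any protobuf builder, buildPartial() constructs the message without the initialization check that build() performs, which can be useful for intermediate snapshots; a sketch, with a placeholder dataset ID:

  InputDataConfig.Builder builder = InputDataConfig.newBuilder().setDatasetId("1234567890");
  InputDataConfig full = builder.build();           // runs the standard initialization check
  InputDataConfig partial = builder.buildPartial(); // skips that check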

clear()

public InputDataConfig.Builder clear()
Returns
InputDataConfig.Builder
Overrides

clearAnnotationSchemaUri()

public InputDataConfig.Builder clearAnnotationSchemaUri()

Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

string annotation_schema_uri = 9;

Returns
InputDataConfig.Builder: This builder for chaining.

clearAnnotationsFilter()

public InputDataConfig.Builder clearAnnotationsFilter()

Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.

string annotations_filter = 6;

Returns
InputDataConfig.Builder: This builder for chaining.

clearBigqueryDestination()

public InputDataConfig.Builder clearBigqueryDestination()

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Returns
InputDataConfig.Builder

clearDatasetId()

public InputDataConfig.Builder clearDatasetId()

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported to training, to pick and choose from.

string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
InputDataConfig.Builder: This builder for chaining.

clearDestination()

public InputDataConfig.Builder clearDestination()
Returns
InputDataConfig.Builder

clearField(Descriptors.FieldDescriptor field)

public InputDataConfig.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
field (FieldDescriptor)
Returns
InputDataConfig.Builder
Overrides

clearFilterSplit()

public InputDataConfig.Builder clearFilterSplit()

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Returns
InputDataConfig.Builder

clearFractionSplit()

public InputDataConfig.Builder clearFractionSplit()

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Returns
InputDataConfig.Builder

clearGcsDestination()

public InputDataConfig.Builder clearGcsDestination()

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Returns
InputDataConfig.Builder

clearOneof(Descriptors.OneofDescriptor oneof)

public InputDataConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
oneof (OneofDescriptor)
Returns
InputDataConfig.Builder
Overrides

clearPersistMlUseAssignment()

public InputDataConfig.Builder clearPersistMlUseAssignment()

Whether to persist the ML use assignment to data item system labels.

bool persist_ml_use_assignment = 11;

Returns
InputDataConfig.Builder: This builder for chaining.

clearPredefinedSplit()

public InputDataConfig.Builder clearPredefinedSplit()

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Returns
InputDataConfig.Builder

clearSavedQueryId()

public InputDataConfig.Builder clearSavedQueryId()

Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id, used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: problem type.

string saved_query_id = 7;

Returns
InputDataConfig.Builder: This builder for chaining.

clearSplit()

public InputDataConfig.Builder clearSplit()
Returns
InputDataConfig.Builder

clearStratifiedSplit()

public InputDataConfig.Builder clearStratifiedSplit()

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Returns
InputDataConfig.Builder

clearTimestampSplit()

public InputDataConfig.Builder clearTimestampSplit()

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Returns
InputDataConfig.Builder

clone()

public InputDataConfig.Builder clone()
Returns
InputDataConfig.Builder
Overrides

getAnnotationSchemaUri()

public String getAnnotationSchemaUri()

Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

string annotation_schema_uri = 9;

Returns
String: The annotationSchemaUri.

getAnnotationSchemaUriBytes()

public ByteString getAnnotationSchemaUriBytes()

Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

string annotation_schema_uri = 9;

Returns
ByteString: The bytes for annotationSchemaUri.

getAnnotationsFilter()

public String getAnnotationsFilter()

Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.

string annotations_filter = 6;

Returns
String: The annotationsFilter.

getAnnotationsFilterBytes()

public ByteString getAnnotationsFilterBytes()

Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.

string annotations_filter = 6;

Returns
ByteString: The bytes for annotationsFilter.

getBigqueryDestination()

public BigQueryDestination getBigqueryDestination()

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Returns
BigQueryDestination: The bigqueryDestination.

getBigqueryDestinationBuilder()

public BigQueryDestination.Builder getBigqueryDestinationBuilder()

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Returns
BigQueryDestination.Builder

getBigqueryDestinationOrBuilder()

public BigQueryDestinationOrBuilder getBigqueryDestinationOrBuilder()

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Returns
BigQueryDestinationOrBuilder

getDatasetId()

public String getDatasetId()

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported to training, to pick and choose from.

string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
String: The datasetId.

getDatasetIdBytes()

public ByteString getDatasetIdBytes()

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported to training, to pick and choose from.

string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
ByteString: The bytes for datasetId.

getDefaultInstanceForType()

public InputDataConfig getDefaultInstanceForType()
Returns
InputDataConfig

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
Descriptor
Overrides

getDestinationCase()

public InputDataConfig.DestinationCase getDestinationCase()
Returns
InputDataConfig.DestinationCase

getFilterSplit()

public FilterSplit getFilterSplit()

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Returns
FilterSplit: The filterSplit.

getFilterSplitBuilder()

public FilterSplit.Builder getFilterSplitBuilder()

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Returns
FilterSplit.Builder

getFilterSplitOrBuilder()

public FilterSplitOrBuilder getFilterSplitOrBuilder()

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Returns
FilterSplitOrBuilder

getFractionSplit()

public FractionSplit getFractionSplit()

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Returns
FractionSplit: The fractionSplit.

getFractionSplitBuilder()

public FractionSplit.Builder getFractionSplitBuilder()

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Returns
FractionSplit.Builder

getFractionSplitOrBuilder()

public FractionSplitOrBuilder getFractionSplitOrBuilder()

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Returns
FractionSplitOrBuilder

getGcsDestination()

public GcsDestination getGcsDestination()

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Returns
GcsDestination: The gcsDestination.

getGcsDestinationBuilder()

public GcsDestination.Builder getGcsDestinationBuilder()

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Returns
GcsDestination.Builder

getGcsDestinationOrBuilder()

public GcsDestinationOrBuilder getGcsDestinationOrBuilder()

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Returns
GcsDestinationOrBuilder

getPersistMlUseAssignment()

public boolean getPersistMlUseAssignment()

Whether to persist the ML use assignment to data item system labels.

bool persist_ml_use_assignment = 11;

Returns
boolean: The persistMlUseAssignment.

getPredefinedSplit()

public PredefinedSplit getPredefinedSplit()

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Returns
PredefinedSplit: The predefinedSplit.

getPredefinedSplitBuilder()

public PredefinedSplit.Builder getPredefinedSplitBuilder()

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Returns
PredefinedSplit.Builder

getPredefinedSplitOrBuilder()

public PredefinedSplitOrBuilder getPredefinedSplitOrBuilder()

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Returns
PredefinedSplitOrBuilder

getSavedQueryId()

public String getSavedQueryId()

Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id, used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: problem type.

string saved_query_id = 7;

Returns
String: The savedQueryId.

getSavedQueryIdBytes()

public ByteString getSavedQueryIdBytes()

Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id, used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: problem type.

string saved_query_id = 7;

Returns
ByteString: The bytes for savedQueryId.

getSplitCase()

public InputDataConfig.SplitCase getSplitCase()
Returns
InputDataConfig.SplitCase
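The split fields form a protobuf oneof, so getSplitCase() reports which member, if any, is currently set (getDestinationCase() plays the same role for the destination fields). A sketch, where "ml_use" is a placeholder key and PredefinedSplit's setter is assumed from its own message definition:

  InputDataConfig.Builder builder =
      InputDataConfig.newBuilder()
          .setPredefinedSplit(PredefinedSplit.newBuilder().setKey("ml_use"));
  switch (builder.getSplitCase()) {
    case PREDEFINED_SPLIT:
      System.out.println("key = " + builder.getPredefinedSplit().getKey());
      break;
    case SPLIT_NOT_SET:
      System.out.println("no split configured");
      break;
    default:
      break;
  }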

getStratifiedSplit()

public StratifiedSplit getStratifiedSplit()

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Returns
StratifiedSplit: The stratifiedSplit.

getStratifiedSplitBuilder()

public StratifiedSplit.Builder getStratifiedSplitBuilder()

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Returns
StratifiedSplit.Builder

getStratifiedSplitOrBuilder()

public StratifiedSplitOrBuilder getStratifiedSplitOrBuilder()

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Returns
StratifiedSplitOrBuilder

getTimestampSplit()

public TimestampSplit getTimestampSplit()

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Returns
TimestampSplit: The timestampSplit.

getTimestampSplitBuilder()

public TimestampSplit.Builder getTimestampSplitBuilder()

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Returns
TimestampSplit.Builder

getTimestampSplitOrBuilder()

public TimestampSplitOrBuilder getTimestampSplitOrBuilder()

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Returns
TimestampSplitOrBuilder

hasBigqueryDestination()

public boolean hasBigqueryDestination()

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Returns
boolean: Whether the bigqueryDestination field is set.

hasFilterSplit()

public boolean hasFilterSplit()

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Returns
boolean: Whether the filterSplit field is set.

hasFractionSplit()

public boolean hasFractionSplit()

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Returns
boolean: Whether the fractionSplit field is set.

hasGcsDestination()

public boolean hasGcsDestination()

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Returns
boolean: Whether the gcsDestination field is set.

hasPredefinedSplit()

public boolean hasPredefinedSplit()

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Returns
boolean: Whether the predefinedSplit field is set.

hasStratifiedSplit()

public boolean hasStratifiedSplit()

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Returns
boolean: Whether the stratifiedSplit field is set.

hasTimestampSplit()

public boolean hasTimestampSplit()

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Returns
boolean: Whether the timestampSplit field is set.

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
FieldAccessorTable
Overrides

isInitialized()

public final boolean isInitialized()
Returns
boolean
Overrides

mergeBigqueryDestination(BigQueryDestination value)

public InputDataConfig.Builder mergeBigqueryDestination(BigQueryDestination value)

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Parameter
value (BigQueryDestination)
Returns
InputDataConfig.Builder

mergeFilterSplit(FilterSplit value)

public InputDataConfig.Builder mergeFilterSplit(FilterSplit value)

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Parameter
value (FilterSplit)
Returns
InputDataConfig.Builder

mergeFractionSplit(FractionSplit value)

public InputDataConfig.Builder mergeFractionSplit(FractionSplit value)

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Parameter
value (FractionSplit)
Returns
InputDataConfig.Builder

mergeFrom(InputDataConfig other)

public InputDataConfig.Builder mergeFrom(InputDataConfig other)
Parameter
other (InputDataConfig)
Returns
InputDataConfig.Builder

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public InputDataConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
input (CodedInputStream)
extensionRegistry (ExtensionRegistryLite)
Returns
InputDataConfig.Builder
Overrides
Exceptions
IOException
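A common use of this overload is re-hydrating a builder from serialized bytes; a sketch (the surrounding method must handle the declared IOException, and existingConfig is a placeholder for some InputDataConfig instance):

  byte[] bytes = existingConfig.toByteArray();
  InputDataConfig roundTripped =
      InputDataConfig.newBuilder()
          .mergeFrom(CodedInputStream.newInstance(bytes), ExtensionRegistryLite.getEmptyRegistry())
          .build();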

mergeFrom(Message other)

public InputDataConfig.Builder mergeFrom(Message other)
Parameter
other (Message)
Returns
InputDataConfig.Builder
Overrides

mergeGcsDestination(GcsDestination value)

public InputDataConfig.Builder mergeGcsDestination(GcsDestination value)

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Parameter
value (GcsDestination)
Returns
InputDataConfig.Builder

mergePredefinedSplit(PredefinedSplit value)

public InputDataConfig.Builder mergePredefinedSplit(PredefinedSplit value)

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Parameter
value (PredefinedSplit)
Returns
InputDataConfig.Builder

mergeStratifiedSplit(StratifiedSplit value)

public InputDataConfig.Builder mergeStratifiedSplit(StratifiedSplit value)

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Parameter
value (StratifiedSplit)
Returns
InputDataConfig.Builder

mergeTimestampSplit(TimestampSplit value)

public InputDataConfig.Builder mergeTimestampSplit(TimestampSplit value)

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Parameter
value (TimestampSplit)
Returns
InputDataConfig.Builder

mergeUnknownFields(UnknownFieldSet unknownFields)

public final InputDataConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
unknownFields (UnknownFieldSet)
Returns
InputDataConfig.Builder
Overrides

setAnnotationSchemaUri(String value)

public InputDataConfig.Builder setAnnotationSchemaUri(String value)

Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

string annotation_schema_uri = 9;

Parameter
value (String): The annotationSchemaUri to set.
Returns
InputDataConfig.Builder: This builder for chaining.
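A sketch of pointing custom training at one of the published annotation schemas; the specific file name below is illustrative, not taken from this page:

  builder.setAnnotationSchemaUri(
      "gs://google-cloud-aiplatform/schema/dataset/annotation/image_classification_1.0.0.yaml");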

setAnnotationSchemaUriBytes(ByteString value)

public InputDataConfig.Builder setAnnotationSchemaUriBytes(ByteString value)

Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

string annotation_schema_uri = 9;

Parameter
value (ByteString): The bytes for annotationSchemaUri to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setAnnotationsFilter(String value)

public InputDataConfig.Builder setAnnotationsFilter(String value)

Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.

string annotations_filter = 6;

Parameter
value (String): The annotationsFilter to set.
Returns
InputDataConfig.Builder: This builder for chaining.
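A sketch of the setter; the filter expression below is a placeholder only, and the real syntax is that of ListAnnotations:

  // Placeholder expression; consult the ListAnnotations filter syntax.
  builder.setAnnotationsFilter("labels.sample-label=\"sample-value\"");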

setAnnotationsFilterBytes(ByteString value)

public InputDataConfig.Builder setAnnotationsFilterBytes(ByteString value)

Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.

string annotations_filter = 6;

Parameter
value (ByteString): The bytes for annotationsFilter to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setBigqueryDestination(BigQueryDestination value)

public InputDataConfig.Builder setBigqueryDestination(BigQueryDestination value)

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Parameter
value (BigQueryDestination)
Returns
InputDataConfig.Builder
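A sketch of directing training data into a BigQuery project; "bq://my-project" is a placeholder, and BigQueryDestination.Builder.setOutputUri is assumed from the BigQueryDestination message rather than this page:

  builder.setBigqueryDestination(
      BigQueryDestination.newBuilder().setOutputUri("bq://my-project"));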

setBigqueryDestination(BigQueryDestination.Builder builderForValue)

public InputDataConfig.Builder setBigqueryDestination(BigQueryDestination.Builder builderForValue)

Only applicable to custom training with a tabular Dataset with a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id><annotation-type><timestamp-of-training-call>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset, three tables are created: training, validation, and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.training"
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.validation"
  • AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id><annotation-type><time>.test"

.google.cloud.aiplatform.v1.BigQueryDestination bigquery_destination = 10;

Parameter
builderForValue (BigQueryDestination.Builder)
Returns
InputDataConfig.Builder

setDatasetId(String value)

public InputDataConfig.Builder setDatasetId(String value)

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported to training, to pick and choose from.

string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];

Parameter
value (String): The datasetId to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setDatasetIdBytes(ByteString value)

public InputDataConfig.Builder setDatasetIdBytes(ByteString value)

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported to training, to pick and choose from.

string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];

Parameter
value (ByteString): The bytes for datasetId to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setField(Descriptors.FieldDescriptor field, Object value)

public InputDataConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
field (FieldDescriptor)
value (Object)
Returns
InputDataConfig.Builder
Overrides

setFilterSplit(FilterSplit value)

public InputDataConfig.Builder setFilterSplit(FilterSplit value)

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Parameter
value (FilterSplit)
Returns
InputDataConfig.Builder
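A sketch under the assumption that FilterSplit exposes training/validation/test filter setters (its fields are defined on the FilterSplit message, not on this page); the label key and values are placeholders:

  builder.setFilterSplit(
      FilterSplit.newBuilder()
          .setTrainingFilter("labels.ml_use=\"training\"")
          .setValidationFilter("labels.ml_use=\"validation\"")
          .setTestFilter("labels.ml_use=\"test\""));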

setFilterSplit(FilterSplit.Builder builderForValue)

public InputDataConfig.Builder setFilterSplit(FilterSplit.Builder builderForValue)

Split based on the provided filters for each set.

.google.cloud.aiplatform.v1.FilterSplit filter_split = 3;

Parameter
builderForValue (FilterSplit.Builder)
Returns
InputDataConfig.Builder

setFractionSplit(FractionSplit value)

public InputDataConfig.Builder setFractionSplit(FractionSplit value)

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Parameter
value (FractionSplit)
Returns
InputDataConfig.Builder
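A sketch assuming FractionSplit's fraction setters from its own message definition; the fractions are placeholders and would normally sum to 1:

  builder.setFractionSplit(
      FractionSplit.newBuilder()
          .setTrainingFraction(0.8)
          .setValidationFraction(0.1)
          .setTestFraction(0.1));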

setFractionSplit(FractionSplit.Builder builderForValue)

public InputDataConfig.Builder setFractionSplit(FractionSplit.Builder builderForValue)

Split based on fractions defining the size of each set.

.google.cloud.aiplatform.v1.FractionSplit fraction_split = 2;

Parameter
builderForValue (FractionSplit.Builder)
Returns
InputDataConfig.Builder

setGcsDestination(GcsDestination value)

public InputDataConfig.Builder setGcsDestination(GcsDestination value)

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Parameter
value (GcsDestination)
Returns
InputDataConfig.Builder
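A sketch with a placeholder bucket; GcsDestination.Builder.setOutputUriPrefix is assumed from the GcsDestination message rather than this page:

  builder.setGcsDestination(
      GcsDestination.newBuilder().setOutputUriPrefix("gs://my-bucket/training-output/"));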

setGcsDestination(GcsDestination.Builder builderForValue)

public InputDataConfig.Builder setGcsDestination(GcsDestination.Builder builderForValue)

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
  • AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"

.google.cloud.aiplatform.v1.GcsDestination gcs_destination = 8;

Parameter
builderForValue (GcsDestination.Builder)
Returns
InputDataConfig.Builder

setPersistMlUseAssignment(boolean value)

public InputDataConfig.Builder setPersistMlUseAssignment(boolean value)

Whether to persist the ML use assignment to data item system labels.

bool persist_ml_use_assignment = 11;

Parameter
value (boolean): The persistMlUseAssignment to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setPredefinedSplit(PredefinedSplit value)

public InputDataConfig.Builder setPredefinedSplit(PredefinedSplit value)

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Parameter
value (PredefinedSplit)
Returns
InputDataConfig.Builder
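A sketch assuming PredefinedSplit's key setter from its own message definition; "ml_use" is a placeholder for a key whose per-DataItem value assigns the training, validation, or test role:

  builder.setPredefinedSplit(PredefinedSplit.newBuilder().setKey("ml_use"));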

setPredefinedSplit(PredefinedSplit.Builder builderForValue)

public InputDataConfig.Builder setPredefinedSplit(PredefinedSplit.Builder builderForValue)

Supported only for tabular Datasets. Split based on a predefined key.

.google.cloud.aiplatform.v1.PredefinedSplit predefined_split = 4;

Parameter
builderForValue (PredefinedSplit.Builder)
Returns
InputDataConfig.Builder

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public InputDataConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
field (FieldDescriptor)
index (int)
value (Object)
Returns
InputDataConfig.Builder
Overrides

setSavedQueryId(String value)

public InputDataConfig.Builder setSavedQueryId(String value)

Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id, used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: problem type.

string saved_query_id = 7;

Parameter
value (String): The savedQueryId to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setSavedQueryIdBytes(ByteString value)

public InputDataConfig.Builder setSavedQueryIdBytes(ByteString value)

Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id, used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: problem type.

string saved_query_id = 7;

Parameter
value (ByteString): The bytes for savedQueryId to set.
Returns
InputDataConfig.Builder: This builder for chaining.

setStratifiedSplit(StratifiedSplit value)

public InputDataConfig.Builder setStratifiedSplit(StratifiedSplit value)

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Parameter
value (StratifiedSplit)
Returns
InputDataConfig.Builder
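A sketch assuming StratifiedSplit exposes a key plus fraction setters (defined on the StratifiedSplit message, not on this page); "target_column" and the fractions are placeholders:

  builder.setStratifiedSplit(
      StratifiedSplit.newBuilder()
          .setKey("target_column")
          .setTrainingFraction(0.8)
          .setValidationFraction(0.1)
          .setTestFraction(0.1));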

setStratifiedSplit(StratifiedSplit.Builder builderForValue)

public InputDataConfig.Builder setStratifiedSplit(StratifiedSplit.Builder builderForValue)

Supported only for tabular Datasets. Split based on the distribution of the specified column.

.google.cloud.aiplatform.v1.StratifiedSplit stratified_split = 12;

Parameter
builderForValue (StratifiedSplit.Builder)
Returns
InputDataConfig.Builder

setTimestampSplit(TimestampSplit value)

public InputDataConfig.Builder setTimestampSplit(TimestampSplit value)

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Parameter
value (TimestampSplit)
Returns
InputDataConfig.Builder
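A sketch assuming TimestampSplit exposes a key plus fraction setters (defined on the TimestampSplit message, not on this page); "timestamp_column" and the fractions are placeholders:

  builder.setTimestampSplit(
      TimestampSplit.newBuilder()
          .setKey("timestamp_column")
          .setTrainingFraction(0.8)
          .setValidationFraction(0.1)
          .setTestFraction(0.1));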

setTimestampSplit(TimestampSplit.Builder builderForValue)

public InputDataConfig.Builder setTimestampSplit(TimestampSplit.Builder builderForValue)

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.

.google.cloud.aiplatform.v1.TimestampSplit timestamp_split = 5;

Parameter
builderForValue (TimestampSplit.Builder)
Returns
InputDataConfig.Builder

setUnknownFields(UnknownFieldSet unknownFields)

public final InputDataConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
unknownFields (UnknownFieldSet)
Returns
InputDataConfig.Builder
Overrides