public static final class ModelMonitoringSchema.Builder extends GeneratedMessageV3.Builder<ModelMonitoringSchema.Builder> implements ModelMonitoringSchemaOrBuilder
The Model Monitoring Schema definition.
Protobuf type google.cloud.aiplatform.v1beta1.ModelMonitoringSchema
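As a rough, non-authoritative usage sketch: the builder is obtained through the generated newBuilder() factory, populated with FieldSchema entries, and finalized with build(). The field names below are hypothetical, and the FieldSchema setName/setDataType setters are assumptions made for illustration; check the FieldSchema reference for the actual setters.

import com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema;

public class ModelMonitoringSchemaExample {
  public static void main(String[] args) {
    // newBuilder() is generated for every protobuf message type.
    ModelMonitoringSchema schema =
        ModelMonitoringSchema.newBuilder()
            // Nested FieldSchema builders can be passed directly to the add* overloads.
            // setName/setDataType are assumed FieldSchema setters; the names are hypothetical.
            .addFeatureFields(
                ModelMonitoringSchema.FieldSchema.newBuilder()
                    .setName("age")
                    .setDataType("float"))
            .addPredictionFields(
                ModelMonitoringSchema.FieldSchema.newBuilder()
                    .setName("predicted_income")
                    .setDataType("float"))
            .addGroundTruthFields(
                ModelMonitoringSchema.FieldSchema.newBuilder()
                    .setName("income")
                    .setDataType("float"))
            .build();
    System.out.println(schema);
  }
}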
Inherited Members
com.google.protobuf.GeneratedMessageV3.Builder.getUnknownFieldSetBuilder()
com.google.protobuf.GeneratedMessageV3.Builder.internalGetMapFieldReflection(int)
com.google.protobuf.GeneratedMessageV3.Builder.internalGetMutableMapFieldReflection(int)
com.google.protobuf.GeneratedMessageV3.Builder.mergeUnknownLengthDelimitedField(int,com.google.protobuf.ByteString)
com.google.protobuf.GeneratedMessageV3.Builder.mergeUnknownVarintField(int,int)
com.google.protobuf.GeneratedMessageV3.Builder.parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.ExtensionRegistryLite,int)
com.google.protobuf.GeneratedMessageV3.Builder.setUnknownFieldSetBuilder(com.google.protobuf.UnknownFieldSet.Builder)
Static Methods
public static final Descriptors.Descriptor getDescriptor()
Methods
public ModelMonitoringSchema.Builder addAllFeatureFields(Iterable<? extends ModelMonitoringSchema.FieldSchema> values)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Parameter: values (Iterable<? extends com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema>)
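A minimal fragment showing the bulk variant; it assumes the imports from the sketch above and uses default FieldSchema instances purely to stay self-contained:

// Collect FieldSchema messages (defaults here, for brevity) and append them in one call.
java.util.List<ModelMonitoringSchema.FieldSchema> features = java.util.Arrays.asList(
    ModelMonitoringSchema.FieldSchema.getDefaultInstance(),
    ModelMonitoringSchema.FieldSchema.getDefaultInstance());
ModelMonitoringSchema.Builder builder = ModelMonitoringSchema.newBuilder();
builder.addAllFeatureFields(features);  // elements are appended in iteration order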
public ModelMonitoringSchema.Builder addAllGroundTruthFields(Iterable<? extends ModelMonitoringSchema.FieldSchema> values)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Parameter: values (Iterable<? extends com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema>)
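The same bulk pattern applies to ground truth fields; a brief fragment continuing the example above:

// Append every ground truth FieldSchema from an existing collection.
builder.addAllGroundTruthFields(
    java.util.Collections.singletonList(
        ModelMonitoringSchema.FieldSchema.getDefaultInstance()));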
public ModelMonitoringSchema.Builder addAllPredictionFields(Iterable<? extends ModelMonitoringSchema.FieldSchema> values)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Parameter: values (Iterable<? extends com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema>)
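And likewise for prediction outputs; this fragment continues the example above and reuses the hypothetical predicted_income field (setName is an assumed FieldSchema setter):

// Prediction outputs follow the same requirements as feature_fields.
builder.addAllPredictionFields(
    java.util.Collections.singletonList(
        ModelMonitoringSchema.FieldSchema.newBuilder()
            .setName("predicted_income")  // assumed setter, hypothetical name
            .build()));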
public ModelMonitoringSchema.Builder addFeatureFields(ModelMonitoringSchema.FieldSchema value)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.Builder addFeatureFields(ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.Builder addFeatureFields(int index, ModelMonitoringSchema.FieldSchema value)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.Builder addFeatureFields(int index, ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.FieldSchema.Builder addFeatureFieldsBuilder()
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.FieldSchema.Builder addFeatureFieldsBuilder(int index)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Parameter: index (int)
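The *Builder variants return a nested builder that stays attached to the parent builder, so in-place edits are reflected without a separate set call; a sketch (setName is an assumed FieldSchema setter):

// Insert a new, empty FieldSchema builder at position 0 and edit it in place.
ModelMonitoringSchema.FieldSchema.Builder first = builder.addFeatureFieldsBuilder(0);
first.setName("age");  // assumed setter; the change is reflected in the parent builder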
public ModelMonitoringSchema.Builder addGroundTruthFields(ModelMonitoringSchema.FieldSchema value)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.Builder addGroundTruthFields(ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.Builder addGroundTruthFields(int index, ModelMonitoringSchema.FieldSchema value)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.Builder addGroundTruthFields(int index, ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.FieldSchema.Builder addGroundTruthFieldsBuilder()
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.FieldSchema.Builder addGroundTruthFieldsBuilder(int index)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Parameter: index (int)
public ModelMonitoringSchema.Builder addPredictionFields(ModelMonitoringSchema.FieldSchema value)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.Builder addPredictionFields(ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.Builder addPredictionFields(int index, ModelMonitoringSchema.FieldSchema value)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.Builder addPredictionFields(int index, ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.FieldSchema.Builder addPredictionFieldsBuilder()
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.FieldSchema.Builder addPredictionFieldsBuilder(int index)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Parameter: index (int)
public ModelMonitoringSchema.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Overrides
public ModelMonitoringSchema build()
public ModelMonitoringSchema buildPartial()
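In generated protobuf Java, build() performs an initialization check before returning the message, while buildPartial() skips that check; a short sketch continuing the earlier example:

// build() returns the immutable message after an initialization check;
// buildPartial() returns it without that check.
ModelMonitoringSchema complete = builder.build();
ModelMonitoringSchema partial = builder.buildPartial();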
public ModelMonitoringSchema.Builder clear()
Overrides
public ModelMonitoringSchema.Builder clearFeatureFields()
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
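The clear* methods empty the corresponding repeated field so it can be repopulated; a tiny fragment continuing the earlier example:

// Drop all previously added feature fields, then start over.
builder.clearFeatureFields();
builder.addFeatureFields(ModelMonitoringSchema.FieldSchema.getDefaultInstance());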
public ModelMonitoringSchema.Builder clearField(Descriptors.FieldDescriptor field)
Overrides
public ModelMonitoringSchema.Builder clearGroundTruthFields()
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Overrides
public ModelMonitoringSchema.Builder clearPredictionFields()
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.Builder clone()
Overrides
public ModelMonitoringSchema getDefaultInstanceForType()
public Descriptors.Descriptor getDescriptorForType()
Overrides
public ModelMonitoringSchema.FieldSchema getFeatureFields(int index)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Parameter: index (int)
public ModelMonitoringSchema.FieldSchema.Builder getFeatureFieldsBuilder(int index)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Parameter: index (int)
public List<ModelMonitoringSchema.FieldSchema.Builder> getFeatureFieldsBuilderList()
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public int getFeatureFieldsCount()
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Returns: int
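The count pairs with the indexed getter for positional iteration; a brief fragment continuing the earlier example:

// Walk the repeated field by index.
for (int i = 0; i < builder.getFeatureFieldsCount(); i++) {
  ModelMonitoringSchema.FieldSchema field = builder.getFeatureFields(i);
  System.out.println(field);
}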
public List<ModelMonitoringSchema.FieldSchema> getFeatureFieldsList()
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.FieldSchemaOrBuilder getFeatureFieldsOrBuilder(int index)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Parameter: index (int)
public List<? extends ModelMonitoringSchema.FieldSchemaOrBuilder> getFeatureFieldsOrBuilderList()
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Returns: List<? extends com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchemaOrBuilder>
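The OrBuilder list offers a read view over whatever mix of messages and builders currently backs the repeated field, avoiding unnecessary copies; a fragment continuing the earlier example:

// Read-only traversal; elements may be messages or live builders.
for (ModelMonitoringSchema.FieldSchemaOrBuilder f : builder.getFeatureFieldsOrBuilderList()) {
  System.out.println(f);
}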
public ModelMonitoringSchema.FieldSchema getGroundTruthFields(int index)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Parameter: index (int)
public ModelMonitoringSchema.FieldSchema.Builder getGroundTruthFieldsBuilder(int index)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Parameter: index (int)
public List<ModelMonitoringSchema.FieldSchema.Builder> getGroundTruthFieldsBuilderList()
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public int getGroundTruthFieldsCount()
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Returns: int
public List<ModelMonitoringSchema.FieldSchema> getGroundTruthFieldsList()
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.FieldSchemaOrBuilder getGroundTruthFieldsOrBuilder(int index)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Parameter: index (int)
public List<? extends ModelMonitoringSchema.FieldSchemaOrBuilder> getGroundTruthFieldsOrBuilderList()
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Returns: List<? extends com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchemaOrBuilder>
public ModelMonitoringSchema.FieldSchema getPredictionFields(int index)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Parameter: index (int)
public ModelMonitoringSchema.FieldSchema.Builder getPredictionFieldsBuilder(int index)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Parameter: index (int)
public List<ModelMonitoringSchema.FieldSchema.Builder> getPredictionFieldsBuilderList()
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public int getPredictionFieldsCount()
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Returns: int
public List<ModelMonitoringSchema.FieldSchema> getPredictionFieldsList()
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.FieldSchemaOrBuilder getPredictionFieldsOrBuilder(int index)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Parameter: index (int)
public List<? extends ModelMonitoringSchema.FieldSchemaOrBuilder> getPredictionFieldsOrBuilderList()
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Returns: List<? extends com.google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchemaOrBuilder>
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Overrides
public final boolean isInitialized()
Overrides
public ModelMonitoringSchema.Builder mergeFrom(ModelMonitoringSchema other)
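For repeated fields, the generated mergeFrom appends the other message's elements to those already in the builder; a sketch continuing the earlier example:

// Appends other's featureFields / predictionFields / groundTruthFields to this builder.
ModelMonitoringSchema other = ModelMonitoringSchema.newBuilder()
    .addFeatureFields(ModelMonitoringSchema.FieldSchema.getDefaultInstance())
    .build();
builder.mergeFrom(other);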
public ModelMonitoringSchema.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Overrides
public ModelMonitoringSchema.Builder mergeFrom(Message other)
Parameter: other (Message)
Overrides
public final ModelMonitoringSchema.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Overrides
public ModelMonitoringSchema.Builder removeFeatureFields(int index)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
Parameter: index (int)
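Removal is by position; a guard against an empty list keeps this fragment (continuing the earlier example) safe:

// Remove the last feature field, if any.
int count = builder.getFeatureFieldsCount();
if (count > 0) {
  builder.removeFeatureFields(count - 1);
}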
public ModelMonitoringSchema.Builder removeGroundTruthFields(int index)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
Parameter: index (int)
public ModelMonitoringSchema.Builder removePredictionFields(int index)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
Parameter: index (int)
public ModelMonitoringSchema.Builder setFeatureFields(int index, ModelMonitoringSchema.FieldSchema value)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
public ModelMonitoringSchema.Builder setFeatureFields(int index, ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Feature names of the model. Vertex AI will try to match the features from
your dataset as follows:
- For 'csv' files, the header names are required, and we will extract the
corresponding feature values when the header names align with the
feature names.
- For 'jsonl' files, we will extract the corresponding feature values if
the key names match the feature names.
Note: Nested features are not supported, so please ensure your features
are flattened. Ensure the feature values are scalar or an array of
scalars.
- For 'bigquery' datasets, we will extract the corresponding feature values
if the column names match the feature names.
Note: The column type can be a scalar or an array of scalars. STRUCT or
JSON types are not supported. You may use SQL queries to select or
aggregate the relevant features from your original table. However,
ensure that the 'schema' of the query results meets our requirements.
- For the Vertex AI Endpoint Request Response Logging table or Vertex AI
Batch Prediction Job results: if the instance_type is an array, ensure
that the sequence in feature_fields matches the order of features in the
prediction instance. We will match the feature with the array in the
order specified in feature_fields.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema feature_fields = 1;
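set* replaces the element at an index rather than appending; a fragment under the same assumptions as the earlier sketches:

// Replace the first feature field with a freshly built one.
if (builder.getFeatureFieldsCount() > 0) {
  builder.setFeatureFields(0, ModelMonitoringSchema.FieldSchema.newBuilder());
}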
public ModelMonitoringSchema.Builder setField(Descriptors.FieldDescriptor field, Object value)
Overrides
public ModelMonitoringSchema.Builder setGroundTruthFields(int index, ModelMonitoringSchema.FieldSchema value)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.Builder setGroundTruthFields(int index, ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Target/ground truth names of the model.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema ground_truth_fields = 3;
public ModelMonitoringSchema.Builder setPredictionFields(int index, ModelMonitoringSchema.FieldSchema value)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.Builder setPredictionFields(int index, ModelMonitoringSchema.FieldSchema.Builder builderForValue)
Prediction output names of the model. The requirements are the same as the
feature_fields.
For AutoML Tables, the prediction output name presented in the schema will
be predicted_{target_column}, where target_column is the column you
specified when you trained the model.
For prediction output drift analysis:
- For AutoML Classification, the distribution of the argmax label will be
analyzed.
- For AutoML Regression, the distribution of the value will be analyzed.
repeated .google.cloud.aiplatform.v1beta1.ModelMonitoringSchema.FieldSchema prediction_fields = 2;
public ModelMonitoringSchema.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Overrides
public final ModelMonitoringSchema.Builder setUnknownFields(UnknownFieldSet unknownFields)
Overrides