Interface ModelOrBuilder (2.8.9)

public interface ModelOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder
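
As with other generated protobuf messages, both Model and Model.Builder implement ModelOrBuilder, so read-only helpers can be written against this interface and accept either form. A minimal illustrative sketch; the helper and its validation rule are examples, not part of this API:

  import com.google.cloud.aiplatform.v1.ModelOrBuilder;

  public final class ModelChecks {
    // Accepts a built Model or an in-progress Model.Builder, since both implement ModelOrBuilder.
    public static boolean hasValidDisplayName(ModelOrBuilder model) {
      String displayName = model.getDisplayName();
      // display_name is required and limited to 128 characters (see getDisplayName()).
      return !displayName.isEmpty()
          && displayName.codePointCount(0, displayName.length()) <= 128;
    }
  }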

Methods

containsLabels(String key)

public abstract boolean containsLabels(String key)

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

map<string, string> labels = 17;

Parameter
Name        Description
key         String
Returns
Type        Description
boolean

getArtifactUri()

public abstract String getArtifactUri()

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models.

string artifact_uri = 26 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
String      The artifactUri.

getArtifactUriBytes()

public abstract ByteString getArtifactUriBytes()

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models.

string artifact_uri = 26 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
ByteString  The bytes for artifactUri.

getContainerSpec()

public abstract ModelContainerSpec getContainerSpec()

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models.

.google.cloud.aiplatform.v1.ModelContainerSpec container_spec = 9 [(.google.api.field_behavior) = INPUT_ONLY];

Returns
Type                Description
ModelContainerSpec  The containerSpec.
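
Because container_spec is a singular message field, a typical read guards on hasContainerSpec() first. A minimal sketch, assuming ModelContainerSpec exposes getImageUri() (documented on its own page); the printing is purely illustrative:

  import com.google.cloud.aiplatform.v1.Model;
  import com.google.cloud.aiplatform.v1.ModelContainerSpec;

  static void describeContainer(Model model) {
    if (model.hasContainerSpec()) {
      ModelContainerSpec spec = model.getContainerSpec();
      System.out.println("Serving container image: " + spec.getImageUri());
    } else {
      // For example, AutoML Models carry no container spec.
      System.out.println("No custom container spec.");
    }
  }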

getContainerSpecOrBuilder()

public abstract ModelContainerSpecOrBuilder getContainerSpecOrBuilder()

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models.

.google.cloud.aiplatform.v1.ModelContainerSpec container_spec = 9 [(.google.api.field_behavior) = INPUT_ONLY];

Returns
Type        Description
ModelContainerSpecOrBuilder

getCreateTime()

public abstract Timestamp getCreateTime()

Output only. Timestamp when this Model was uploaded into Vertex AI.

.google.protobuf.Timestamp create_time = 13 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
Timestamp   The createTime.
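
create_time is a google.protobuf.Timestamp; a common step is converting it to java.time. A minimal sketch:

  import com.google.cloud.aiplatform.v1.Model;
  import com.google.protobuf.Timestamp;
  import java.time.Instant;
  import java.util.Optional;

  static Optional<Instant> uploadInstant(Model model) {
    if (!model.hasCreateTime()) {
      return Optional.empty(); // not yet populated by the service
    }
    Timestamp ts = model.getCreateTime();
    return Optional.of(Instant.ofEpochSecond(ts.getSeconds(), ts.getNanos()));
  }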

getCreateTimeOrBuilder()

public abstract TimestampOrBuilder getCreateTimeOrBuilder()

Output only. Timestamp when this Model was uploaded into Vertex AI.

.google.protobuf.Timestamp create_time = 13 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
TimestampOrBuilder

getDeployedModels(int index)

public abstract DeployedModelRef getDeployedModels(int index)

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int
Returns
Type        Description
DeployedModelRef

getDeployedModelsCount()

public abstract int getDeployedModelsCount()

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
int

getDeployedModelsList()

public abstract List<DeployedModelRef> getDeployedModelsList()

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<DeployedModelRef>
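
The repeated field can be read index by index or as a whole list. A minimal sketch, assuming DeployedModelRef exposes getEndpoint() and getDeployedModelId() as documented on its own page:

  import com.google.cloud.aiplatform.v1.DeployedModelRef;
  import com.google.cloud.aiplatform.v1.Model;

  static void listDeployments(Model model) {
    for (DeployedModelRef ref : model.getDeployedModelsList()) {
      // Each reference points at a DeployedModel on some Endpoint, possibly in another Location.
      System.out.println(ref.getEndpoint() + " -> " + ref.getDeployedModelId());
    }
  }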

getDeployedModelsOrBuilder(int index)

public abstract DeployedModelRefOrBuilder getDeployedModelsOrBuilder(int index)

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int
Returns
Type        Description
DeployedModelRefOrBuilder

getDeployedModelsOrBuilderList()

public abstract List<? extends DeployedModelRefOrBuilder> getDeployedModelsOrBuilderList()

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<? extends com.google.cloud.aiplatform.v1.DeployedModelRefOrBuilder>

getDescription()

public abstract String getDescription()

The description of the Model.

string description = 3;

Returns
Type        Description
String      The description.

getDescriptionBytes()

public abstract ByteString getDescriptionBytes()

The description of the Model.

string description = 3;

Returns
Type        Description
ByteString  The bytes for description.

getDisplayName()

public abstract String getDisplayName()

Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

string display_name = 2 [(.google.api.field_behavior) = REQUIRED];

Returns
Type        Description
String      The displayName.

getDisplayNameBytes()

public abstract ByteString getDisplayNameBytes()

Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

string display_name = 2 [(.google.api.field_behavior) = REQUIRED];

Returns
Type        Description
ByteString  The bytes for displayName.

getEncryptionSpec()

public abstract EncryptionSpec getEncryptionSpec()

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

.google.cloud.aiplatform.v1.EncryptionSpec encryption_spec = 24;

Returns
Type            Description
EncryptionSpec  The encryptionSpec.

getEncryptionSpecOrBuilder()

public abstract EncryptionSpecOrBuilder getEncryptionSpecOrBuilder()

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

.google.cloud.aiplatform.v1.EncryptionSpec encryption_spec = 24;

Returns
Type        Description
EncryptionSpecOrBuilder

getEtag()

public abstract String getEtag()

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

string etag = 16;

Returns
Type        Description
String      The etag.
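
A hedged sketch of how the etag supports read-modify-write: toBuilder() carries the etag read from the server into the modified message, so a subsequent update can be applied conditionally rather than as a blind overwrite (the update call itself is omitted here):

  import com.google.cloud.aiplatform.v1.Model;

  static Model withNewDescription(Model current, String newDescription) {
    // toBuilder() copies every field, including etag, so the server can detect
    // concurrent edits when this message is sent back in an update request.
    return current.toBuilder()
        .setDescription(newDescription)
        .build();
  }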

getEtagBytes()

public abstract ByteString getEtagBytes()

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

string etag = 16;

Returns
Type        Description
ByteString  The bytes for etag.

getExplanationSpec()

public abstract ExplanationSpec getExplanationSpec()

The default explanation specification for this Model. The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model, or explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob.

.google.cloud.aiplatform.v1.ExplanationSpec explanation_spec = 23;

Returns
Type             Description
ExplanationSpec  The explanationSpec.
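
A minimal sketch of checking whether a default explanation configuration is present; when it is not, callers must supply explanation_spec at deploy time or on the BatchPredictionJob, as described above:

  import com.google.cloud.aiplatform.v1.Model;

  static boolean needsRequestTimeExplanationSpec(Model model) {
    // false: the Model's default explanation_spec is used unless overridden.
    // true: set explanation_spec on DeployModelRequest.deployed_model or on the BatchPredictionJob.
    return !model.hasExplanationSpec();
  }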

getExplanationSpecOrBuilder()

public abstract ExplanationSpecOrBuilder getExplanationSpecOrBuilder()

The default explanation specification for this Model. The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model, or explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob.

.google.cloud.aiplatform.v1.ExplanationSpec explanation_spec = 23;

Returns
Type        Description
ExplanationSpecOrBuilder

getLabels()

public abstract Map<String,String> getLabels()

Use getLabelsMap() instead.

Returns
Type        Description
Map<String,String>

getLabelsCount()

public abstract int getLabelsCount()

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

map<string, string> labels = 17;

Returns
Type        Description
int

getLabelsMap()

public abstract Map<String,String> getLabelsMap()

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

map<string, string> labels = 17;

Returns
Type        Description
Map<String,String>

getLabelsOrDefault(String key, String defaultValue)

public abstract String getLabelsOrDefault(String key, String defaultValue)

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

map<string, string> labels = 17;

Parameters
Name          Description
key           String
defaultValue  String
Returns
Type          Description
String

getLabelsOrThrow(String key)

public abstract String getLabelsOrThrow(String key)

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

map<string, string> labels = 17;

Parameter
Name        Description
key         String
Returns
Type        Description
String
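
A minimal sketch tying the label accessors together; the key names used here are examples only:

  import com.google.cloud.aiplatform.v1.Model;
  import java.util.Map;

  static void inspectLabels(Model model) {
    if (model.containsLabels("team")) {                     // hypothetical key
      System.out.println("team = " + model.getLabelsOrThrow("team"));
    }
    String env = model.getLabelsOrDefault("env", "unset");  // hypothetical key
    System.out.println("env = " + env + " (" + model.getLabelsCount() + " labels total)");

    // Prefer getLabelsMap() over getLabels() for iteration.
    for (Map.Entry<String, String> label : model.getLabelsMap().entrySet()) {
      System.out.println(label.getKey() + "=" + label.getValue());
    }
  }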

getMetadata()

public abstract Value getMetadata()

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.

.google.protobuf.Value metadata = 6 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
Value       The metadata.
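
metadata is a google.protobuf.Value whose shape depends on the schema referenced by metadata_schema_uri, so it is worth inspecting defensively. A minimal sketch:

  import com.google.cloud.aiplatform.v1.Model;
  import com.google.protobuf.Value;

  static void printMetadata(Model model) {
    if (!model.hasMetadata()) {
      System.out.println("No additional metadata.");
      return;
    }
    Value metadata = model.getMetadata();
    switch (metadata.getKindCase()) {
      case STRUCT_VALUE:
        System.out.println("Struct metadata: " + metadata.getStructValue());
        break;
      case STRING_VALUE:
        System.out.println("String metadata: " + metadata.getStringValue());
        break;
      default:
        System.out.println("Metadata kind: " + metadata.getKindCase());
    }
  }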

getMetadataOrBuilder()

public abstract ValueOrBuilder getMetadataOrBuilder()

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.

.google.protobuf.Value metadata = 6 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
ValueOrBuilder

getMetadataSchemaUri()

public abstract String getMetadataSchemaUri()

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

string metadata_schema_uri = 5 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
String      The metadataSchemaUri.

getMetadataSchemaUriBytes()

public abstract ByteString getMetadataSchemaUriBytes()

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

string metadata_schema_uri = 5 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
ByteString  The bytes for metadataSchemaUri.

getName()

public abstract String getName()

The resource name of the Model.

string name = 1;

Returns
Type        Description
String      The name.
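
The resource name follows the usual projects/{project}/locations/{location}/models/{model} pattern, so the generated ModelName helper can parse it when the value is populated and well formed. A hedged sketch:

  import com.google.cloud.aiplatform.v1.Model;
  import com.google.cloud.aiplatform.v1.ModelName;

  static String modelId(Model model) {
    String name = model.getName();
    if (ModelName.isParsableFrom(name)) {
      return ModelName.parse(name).getModel();
    }
    return name; // fall back to the raw value (e.g., when unset)
  }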

getNameBytes()

public abstract ByteString getNameBytes()

The resource name of the Model.

string name = 1;

Returns
Type        Description
ByteString  The bytes for name.

getPredictSchemata()

public abstract PredictSchemata getPredictSchemata()

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

.google.cloud.aiplatform.v1.PredictSchemata predict_schemata = 4;

Returns
Type             Description
PredictSchemata  The predictSchemata.

getPredictSchemataOrBuilder()

public abstract PredictSchemataOrBuilder getPredictSchemataOrBuilder()

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

.google.cloud.aiplatform.v1.PredictSchemata predict_schemata = 4;

Returns
Type        Description
PredictSchemataOrBuilder

getSupportedDeploymentResourcesTypes(int index)

public abstract Model.DeploymentResourcesType getSupportedDeploymentResourcesTypes(int index)

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int. The index of the element to return.
Returns
Type        Description
Model.DeploymentResourcesType    The supportedDeploymentResourcesTypes at the given index.

getSupportedDeploymentResourcesTypesCount()

public abstract int getSupportedDeploymentResourcesTypesCount()

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
int         The count of supportedDeploymentResourcesTypes.

getSupportedDeploymentResourcesTypesList()

public abstract List<Model.DeploymentResourcesType> getSupportedDeploymentResourcesTypesList()

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<DeploymentResourcesType>    A list containing the supportedDeploymentResourcesTypes.

getSupportedDeploymentResourcesTypesValue(int index)

public abstract int getSupportedDeploymentResourcesTypesValue(int index)

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int. The index of the value to return.
Returns
Type        Description
int         The enum numeric value on the wire of supportedDeploymentResourcesTypes at the given index.

getSupportedDeploymentResourcesTypesValueList()

public abstract List<Integer> getSupportedDeploymentResourcesTypesValueList()

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<Integer>    A list containing the enum numeric values on the wire for supportedDeploymentResourcesTypes.
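
A minimal sketch of using this list to distinguish online-deployable Models from batch-only Models, as described above:

  import com.google.cloud.aiplatform.v1.Model;

  static boolean supportsOnlinePrediction(Model model) {
    // Empty list: the Model cannot be deployed to an Endpoint and is limited to
    // BatchPredictionJob (provided it lists supported input and output storage formats).
    return model.getSupportedDeploymentResourcesTypesCount() > 0;
  }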

getSupportedExportFormats(int index)

public abstract Model.ExportFormat getSupportedExportFormats(int index)

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int
Returns
Type        Description
Model.ExportFormat

getSupportedExportFormatsCount()

public abstract int getSupportedExportFormatsCount()

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
int

getSupportedExportFormatsList()

public abstract List<Model.ExportFormat> getSupportedExportFormatsList()

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<ExportFormat>

getSupportedExportFormatsOrBuilder(int index)

public abstract Model.ExportFormatOrBuilder getSupportedExportFormatsOrBuilder(int index)

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int
Returns
Type        Description
Model.ExportFormatOrBuilder

getSupportedExportFormatsOrBuilderList()

public abstract List<? extends Model.ExportFormatOrBuilder> getSupportedExportFormatsOrBuilderList()

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<? extends com.google.cloud.aiplatform.v1.Model.ExportFormatOrBuilder>
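
A minimal sketch of checking exportability, assuming Model.ExportFormat exposes getId() as documented on its own page:

  import com.google.cloud.aiplatform.v1.Model;

  static void printExportFormats(Model model) {
    if (model.getSupportedExportFormatsCount() == 0) {
      System.out.println("This Model is not available for export.");
      return;
    }
    for (Model.ExportFormat format : model.getSupportedExportFormatsList()) {
      System.out.println("Exportable as: " + format.getId());
    }
  }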

getSupportedInputStorageFormats(int index)

public abstract String getSupportedInputStorageFormats(int index)

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process, uses gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int. The index of the element to return.
Returns
Type        Description
String      The supportedInputStorageFormats at the given index.

getSupportedInputStorageFormatsBytes(int index)

public abstract ByteString getSupportedInputStorageFormatsBytes(int index)

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process, uses gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int. The index of the value to return.
Returns
Type        Description
ByteString  The bytes of the supportedInputStorageFormats at the given index.

getSupportedInputStorageFormatsCount()

public abstract int getSupportedInputStorageFormatsCount()

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process, uses gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
int         The count of supportedInputStorageFormats.

getSupportedInputStorageFormatsList()

public abstract List<String> getSupportedInputStorageFormatsList()

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process, uses gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<String>    A list containing the supportedInputStorageFormats.
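
A minimal sketch of checking whether a given batch input format is usable before building a BatchPredictionJob input config; the format strings are the ones listed above:

  import com.google.cloud.aiplatform.v1.Model;

  static boolean supportsBatchInputFormat(Model model, String format) {
    // e.g. "jsonl", "csv", "tf-record", "bigquery"
    return model.getSupportedInputStorageFormatsList().contains(format);
  }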

getSupportedOutputStorageFormats(int index)

public abstract String getSupportedOutputStorageFormats(int index)

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table, uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int. The index of the element to return.
Returns
Type        Description
String      The supportedOutputStorageFormats at the given index.

getSupportedOutputStorageFormatsBytes(int index)

public abstract ByteString getSupportedOutputStorageFormatsBytes(int index)

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table, uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name        Description
index       int. The index of the value to return.
Returns
Type        Description
ByteString  The bytes of the supportedOutputStorageFormats at the given index.

getSupportedOutputStorageFormatsCount()

public abstract int getSupportedOutputStorageFormatsCount()

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table, uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
int         The count of supportedOutputStorageFormats.

getSupportedOutputStorageFormatsList()

public abstract List<String> getSupportedOutputStorageFormatsList()

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table, uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
List<String>    A list containing the supportedOutputStorageFormats.

getTrainingPipeline()

public abstract String getTrainingPipeline()

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

string training_pipeline = 7 [(.google.api.field_behavior) = OUTPUT_ONLY, (.google.api.resource_reference) = { ... }];

Returns
Type        Description
String      The trainingPipeline.

getTrainingPipelineBytes()

public abstract ByteString getTrainingPipelineBytes()

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

string training_pipeline = 7 [(.google.api.field_behavior) = OUTPUT_ONLY, (.google.api.resource_reference) = { ... }];

Returns
Type        Description
ByteString  The bytes for trainingPipeline.

getUpdateTime()

public abstract Timestamp getUpdateTime()

Output only. Timestamp when this Model was most recently updated.

.google.protobuf.Timestamp update_time = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
Timestamp   The updateTime.

getUpdateTimeOrBuilder()

public abstract TimestampOrBuilder getUpdateTimeOrBuilder()

Output only. Timestamp when this Model was most recently updated.

.google.protobuf.Timestamp update_time = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
TimestampOrBuilder

hasContainerSpec()

public abstract boolean hasContainerSpec()

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models.

.google.cloud.aiplatform.v1.ModelContainerSpec container_spec = 9 [(.google.api.field_behavior) = INPUT_ONLY];

Returns
Type        Description
boolean     Whether the containerSpec field is set.

hasCreateTime()

public abstract boolean hasCreateTime()

Output only. Timestamp when this Model was uploaded into Vertex AI.

.google.protobuf.Timestamp create_time = 13 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
boolean     Whether the createTime field is set.

hasEncryptionSpec()

public abstract boolean hasEncryptionSpec()

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

.google.cloud.aiplatform.v1.EncryptionSpec encryption_spec = 24;

Returns
Type        Description
boolean     Whether the encryptionSpec field is set.

hasExplanationSpec()

public abstract boolean hasExplanationSpec()

The default explanation specification for this Model. The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model, or explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob.

.google.cloud.aiplatform.v1.ExplanationSpec explanation_spec = 23;

Returns
Type        Description
boolean     Whether the explanationSpec field is set.

hasMetadata()

public abstract boolean hasMetadata()

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.

.google.protobuf.Value metadata = 6 [(.google.api.field_behavior) = IMMUTABLE];

Returns
Type        Description
boolean     Whether the metadata field is set.

hasPredictSchemata()

public abstract boolean hasPredictSchemata()

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

.google.cloud.aiplatform.v1.PredictSchemata predict_schemata = 4;

Returns
Type        Description
boolean     Whether the predictSchemata field is set.

hasUpdateTime()

public abstract boolean hasUpdateTime()

Output only. Timestamp when this Model was most recently updated.

.google.protobuf.Timestamp update_time = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type        Description
boolean     Whether the updateTime field is set.