REST Resource: projects.locations.models

Resource: Model

A trained machine learning Model.

JSON representation
{
  "name": string,
  "versionId": string,
  "versionAliases": [
    string
  ],
  "versionCreateTime": string,
  "versionUpdateTime": string,
  "displayName": string,
  "description": string,
  "versionDescription": string,
  "predictSchemata": {
    object (PredictSchemata)
  },
  "metadataSchemaUri": string,
  "metadata": value,
  "supportedExportFormats": [
    {
      object (ExportFormat)
    }
  ],
  "trainingPipeline": string,
  "containerSpec": {
    object (ModelContainerSpec)
  },
  "artifactUri": string,
  "supportedDeploymentResourcesTypes": [
    enum (DeploymentResourcesType)
  ],
  "supportedInputStorageFormats": [
    string
  ],
  "supportedOutputStorageFormats": [
    string
  ],
  "createTime": string,
  "updateTime": string,
  "deployedModels": [
    {
      object (DeployedModelRef)
    }
  ],
  "explanationSpec": {
    object (ExplanationSpec)
  },
  "etag": string,
  "labels": {
    string: string,
    ...
  },
  "encryptionSpec": {
    object (EncryptionSpec)
  }
}
Fields
name

string

The resource name of the Model.

versionId

string

Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model ID. It is an auto-incrementing decimal number in string representation.

versionAliases[]

string

User-provided version aliases so that a model version can be referenced via alias (that is, projects/{project}/locations/{location}/models/{modelId}@{version_alias} instead of the auto-generated version ID, projects/{project}/locations/{location}/models/{modelId}@{versionId}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish aliases from versionId. A default version alias is created for the first version of the model, and there must be exactly one default version alias for a model.
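
The alias format above can be checked client-side with the quoted pattern. A minimal sketch (the helper name is illustrative, not part of the API):

```python
import re

# Pattern quoted in the versionAliases[] description; fullmatch anchors it,
# so an alias must match the pattern in its entirety.
ALIAS_PATTERN = re.compile(r"[a-z][a-zA-Z0-9-]{0,126}[a-z0-9]")

def is_valid_version_alias(alias: str) -> bool:
    """Return True if the alias matches [a-z][a-zA-Z0-9-]{0,126}[a-z0-9]."""
    return ALIAS_PATTERN.fullmatch(alias) is not None
```

Because the pattern must start with a lowercase letter, aliases such as default are valid, while purely numeric strings, which would collide with an auto-generated versionId, are not.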

versionCreateTime

string (Timestamp format)

Output only. Timestamp when this version was created.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
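
Python's datetime type only stores microseconds, so the up-to-nine fractional digits above need truncation when parsed. A small sketch (parse_zulu is an illustrative helper, not part of the API):

```python
import re
from datetime import datetime, timezone

def parse_zulu(ts: str) -> datetime:
    """Parse an RFC3339 UTC "Zulu" timestamp, truncating any nanosecond
    fraction to microseconds (Python's datetime resolution)."""
    m = re.fullmatch(
        r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})(?:\.(\d{1,9}))?Z", ts)
    if m is None:
        raise ValueError(f"not an RFC3339 Zulu timestamp: {ts!r}")
    base = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S")
    # Right-pad the fraction to nanoseconds, then keep the first six
    # digits (microseconds), discarding the rest.
    micros = int((m.group(2) or "0").ljust(9, "0")[:6])
    return base.replace(microsecond=micros, tzinfo=timezone.utc)
```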

versionUpdateTime

string (Timestamp format)

Output only. Timestamp when this version was most recently updated.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

displayName

string

Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the Model.

versionDescription

string

The description of this version.

predictSchemata

object (PredictSchemata)

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

metadataSchemaUri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different from the one given on input, including the URI scheme. The output URI will point to a location where the user has read-only access.

metadata

value (Value format)

Immutable. Additional information about the Model; the schema of the metadata can be found in metadataSchema. Unset if the Model does not have any additional information.

supportedExportFormats[]

object (ExportFormat)

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

trainingPipeline

string

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

containerSpec

object (ModelContainerSpec)

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models.

artifactUri

string

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models.

supportedDeploymentResourcesTypes[]

enum (DeploymentResourcesType)

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supportedInputStorageFormats and supportedOutputStorageFormats.
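
Read together, this field and the two storage-format fields below determine whether a Model can serve online predictions, batch predictions, both, or neither. One way to express that over the parsed Model JSON (serving_modes is an illustrative helper, not part of the API):

```python
def serving_modes(model: dict) -> set:
    """Derive the prediction modes a Model resource (as a parsed JSON
    dict) supports, per the field semantics in this reference."""
    modes = set()
    # Online prediction requires at least one deployment resources type.
    if model.get("supportedDeploymentResourcesTypes"):
        modes.add("online")
    # Batch prediction requires at least one entry in BOTH the input and
    # output storage format lists.
    if (model.get("supportedInputStorageFormats")
            and model.get("supportedOutputStorageFormats")):
        modes.add("batch")
    return modes
```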

supportedInputStorageFormats[]

string

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema.

The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.

  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.

  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.

  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.

  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.

  • file-list Each line of the file is the location of an instance to process. Uses the gcsSource field of the InputConfig object.

If this Model doesn't support any of these formats, it cannot be used with a BatchPredictionJob. However, if it has supportedDeploymentResourcesTypes, it can serve online predictions by using PredictionService.Predict or PredictionService.Explain.
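
As a sketch, a BatchPredictionJob input config pairs one of the formats above with the matching source. The field names used here (instancesFormat, gcsSource, bigquerySource) are taken from the BatchPredictionJob resource and should be verified against that reference:

```python
def make_batch_input_config(instances_format: str, uris: list) -> dict:
    """Build a BatchPredictionJob input config for one of the formats
    listed above; bigquery takes a single table URI, while the Cloud
    Storage formats take one or more gs:// URIs."""
    if instances_format == "bigquery":
        (table_uri,) = uris  # BigQuerySource expects exactly one URI
        return {"instancesFormat": "bigquery",
                "bigquerySource": {"inputUri": table_uri}}
    return {"instancesFormat": instances_format,
            "gcsSource": {"uris": list(uris)}}
```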

supportedOutputStorageFormats[]

string

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema).

The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.

  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.

  • bigquery Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats, it cannot be used with a BatchPredictionJob. However, if it has supportedDeploymentResourcesTypes, it can serve online predictions by using PredictionService.Predict or PredictionService.Explain.
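
Symmetrically to the input side, an output config pairs a predictions format with a destination. Again, the field names (predictionsFormat, gcsDestination, bigqueryDestination) come from the BatchPredictionJob resource and are assumptions to verify there:

```python
def make_batch_output_config(predictions_format: str, uri: str) -> dict:
    """Build a BatchPredictionJob output config for one of the formats
    listed above; bigquery writes to a table, the others under a
    Cloud Storage prefix."""
    if predictions_format == "bigquery":
        return {"predictionsFormat": "bigquery",
                "bigqueryDestination": {"outputUri": uri}}
    return {"predictionsFormat": predictions_format,
            "gcsDestination": {"outputUriPrefix": uri}}
```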

createTime

string (Timestamp format)

Output only. Timestamp when this Model was uploaded into Vertex AI.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

updateTime

string (Timestamp format)

Output only. Timestamp when this Model was most recently updated.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

deployedModels[]

object (DeployedModelRef)

Output only. The pointers to DeployedModels created from this Model. Note that a Model could have been deployed to Endpoints in different Locations.

explanationSpec

object (ExplanationSpec)

The default explanation specification for this Model.

If this field is populated, the Model can be used for [requesting explanation][PredictionService.Explain] after being deployed, and for [batch explanation][BatchPredictionJob.generate_explanation].

All fields of the explanationSpec can be overridden by explanationSpec of DeployModelRequest.deployed_model, or explanationSpec of BatchPredictionJob.

If the default explanation specification is not set for this Model, this Model can still be used for [requesting explanation][PredictionService.Explain] by setting explanationSpec of DeployModelRequest.deployed_model and for [batch explanation][BatchPredictionJob.generate_explanation] by setting explanationSpec of BatchPredictionJob.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
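
The read-modify-write pattern this describes: read the Model, copy its etag into the update body, then patch; if another writer changed the resource in between, the etags no longer match and the update fails instead of silently overwriting. A minimal sketch of preparing such a body (build_patch_body is an illustrative helper, not part of the API):

```python
def build_patch_body(current: dict, changes: dict) -> dict:
    """Merge the desired field changes with the etag read from the
    current resource, so the subsequent update is conditional."""
    body = dict(changes)
    body["etag"] = current["etag"]  # carried over from the prior read
    return body
```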

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Models.

Label keys and values can be no longer than 64 characters (Unicode codepoints) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.
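
A client-side check of one label key or value against these constraints might look like the following sketch; str.islower() accepts international lowercase letters, matching the note above:

```python
def is_valid_label_token(token: str) -> bool:
    """Check one label key or value: at most 64 Unicode codepoints,
    containing only lowercase letters (international allowed), digits,
    underscores, and dashes."""
    return len(token) <= 64 and all(
        ch.islower() or ch.isdigit() or ch in "_-" for ch in token
    )
```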

encryptionSpec

object (EncryptionSpec)

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

ExportFormat

Represents an export format supported by the Model. All formats export to Google Cloud Storage.

JSON representation
{
  "id": string,
  "exportableContents": [
    enum (ExportableContent)
  ]
}
Fields
id

string

Output only. The ID of the export format. The possible format IDs are:

  • tflite Used for Android mobile devices.

  • edgetpu-tflite Used for Edge TPU devices.

  • tf-saved-model A TensorFlow model in SavedModel format.

  • tf-js A TensorFlow.js model that can be used in the browser and in Node.js using JavaScript.

  • core-ml Used for iOS mobile devices.

  • custom-trained A Model that was uploaded or trained by custom code.

exportableContents[]

enum (ExportableContent)

Output only. The content of this Model that may be exported.

ExportableContent

The Model content that can be exported.

Enums
EXPORTABLE_CONTENT_UNSPECIFIED Should not be used.
ARTIFACT Model artifact and any of its supported files. Will be exported to the location specified by the artifactDestination field of the ExportModelRequest.output_config object.
IMAGE The container image that is to be used when deploying this Model. Will be exported to the location specified by the imageDestination field of the ExportModelRequest.output_config object.
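
A sketch of assembling an ExportModelRequest.output_config from a format ID and the contents to export; the destination field names here (artifactDestination, imageDestination, outputUriPrefix, outputUri) follow the ExportModelRequest resource and should be checked against that reference:

```python
def make_export_output_config(format_id: str,
                              artifact_prefix: str = None,
                              image_uri: str = None) -> dict:
    """Build an export output config; which destination applies depends
    on the format's exportableContents (ARTIFACT and/or IMAGE)."""
    config = {"exportFormatId": format_id}
    if artifact_prefix is not None:  # destination for ARTIFACT content
        config["artifactDestination"] = {"outputUriPrefix": artifact_prefix}
    if image_uri is not None:        # destination for IMAGE content
        config["imageDestination"] = {"outputUri": image_uri}
    return config
```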

DeploymentResourcesType

Identifies a type of the Model's prediction resources.

Enums
DEPLOYMENT_RESOURCES_TYPE_UNSPECIFIED Should not be used.
DEDICATED_RESOURCES Resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
AUTOMATIC_RESOURCES Resources that are largely decided by Vertex AI and require only modest additional configuration.

DeployedModelRef

Points to a DeployedModel.

JSON representation
{
  "endpoint": string,
  "deployedModelId": string
}
Fields
endpoint

string

Immutable. A resource name of an Endpoint.

deployedModelId

string

Immutable. An ID of a DeployedModel in the above Endpoint.

Methods

delete

Deletes a Model.

deleteVersion

Deletes a Model version.

export

Exports a trained, exportable Model to a location specified by the user.

get

Gets a Model.

list

Lists Models in a Location.

listVersions

Lists versions of the specified model.

mergeVersionAliases

Merges a set of aliases for a Model version.

patch

Updates a Model.

upload

Uploads a Model artifact into Vertex AI.