Google Cloud AI Platform V1 Client - Class Model (0.31.0)

Reference documentation and code samples for the Google Cloud AI Platform V1 Client class Model.

A trained machine learning Model.

Generated from protobuf message google.cloud.aiplatform.v1.Model

Namespace

Google \ Cloud \ AIPlatform \ V1

Methods

__construct

Constructor.

Parameters
NameDescription
data array

Optional. Data for populating the Message object.

↳ name string

The resource name of the Model.

↳ version_id string

Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.

↳ version_aliases array

User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish it from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.

↳ version_create_time Google\Protobuf\Timestamp

Output only. Timestamp when this version was created.

↳ version_update_time Google\Protobuf\Timestamp

Output only. Timestamp when this version was most recently updated.

↳ display_name string

Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

↳ description string

The description of the Model.

↳ version_description string

The description of this version.

↳ predict_schemata Google\Cloud\AIPlatform\V1\PredictSchemata

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

↳ metadata_schema_uri string

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

↳ metadata Google\Protobuf\Value

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema_uri. Unset if the Model does not have any additional information.

↳ supported_export_formats array<Google\Cloud\AIPlatform\V1\Model\ExportFormat>

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

↳ training_pipeline string

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

↳ pipeline_job string

Optional. This field is populated if the model is produced by a pipeline job.

↳ container_spec Google\Cloud\AIPlatform\V1\ModelContainerSpec

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models or Large Models.

↳ artifact_uri string

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models or Large Models.

↳ supported_deployment_resources_types array

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

↳ supported_input_storage_formats array

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats, it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

↳ supported_output_storage_formats array

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats, it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

↳ create_time Google\Protobuf\Timestamp

Output only. Timestamp when this Model was uploaded into Vertex AI.

↳ update_time Google\Protobuf\Timestamp

Output only. Timestamp when this Model was most recently updated.

↳ deployed_models array<Google\Cloud\AIPlatform\V1\DeployedModelRef>

Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.

↳ explanation_spec Google\Cloud\AIPlatform\V1\ExplanationSpec

The default explanation specification for this Model. If it is populated, the Model can be used for requesting explanation after being deployed and for batch explanation. All fields of the explanation_spec can be overridden by the explanation_spec of DeployModelRequest.deployed_model, or the explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, the Model can still be used for requesting explanation by setting the explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting the explanation_spec of BatchPredictionJob.

↳ etag string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

↳ labels array|Google\Protobuf\Internal\MapField

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

↳ data_stats Google\Cloud\AIPlatform\V1\Model\DataStats

Stats of data used for training or evaluating the Model. Only populated when the Model is trained by a TrainingPipeline with data_input_config.

↳ encryption_spec Google\Cloud\AIPlatform\V1\EncryptionSpec

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

↳ model_source_info Google\Cloud\AIPlatform\V1\ModelSourceInfo

Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.

↳ original_model_info Google\Cloud\AIPlatform\V1\Model\OriginalModelInfo

Output only. If this Model is a copy of another Model, this contains info about the original.

↳ metadata_artifact string

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.
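
For illustration, here is a minimal sketch of populating a Model through the constructor's optional data array; the field values below are placeholders, not values taken from this page.

```php
use Google\Cloud\AIPlatform\V1\Model;

// Populate the Model through the constructor's optional data array.
// Keys are the protobuf field names listed above; all values here are
// illustrative placeholders.
$model = new Model([
    'display_name' => 'my-classification-model', // required, up to 128 UTF-8 characters
    'description'  => 'Example model description.',
    'labels'       => ['team' => 'research', 'stage' => 'experimental'],
]);
```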

getName

The resource name of the Model.

Returns
TypeDescription
string

setName

The resource name of the Model.

Parameter
NameDescription
var string
Returns
TypeDescription
$this
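
Because every setter returns $this, accessor calls can be chained; a short sketch with placeholder values:

```php
use Google\Cloud\AIPlatform\V1\Model;

// Every setter returns $this, so calls can be chained fluently.
$model = (new Model())
    ->setName('projects/my-project/locations/us-central1/models/1234567890')
    ->setDisplayName('my-model');

echo $model->getName(), PHP_EOL; // prints the resource name set above
```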

getVersionId

Output only. Immutable. The version ID of the model.

A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.

Returns
TypeDescription
string

setVersionId

Output only. Immutable. The version ID of the model.

A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getVersionAliases

User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}).

The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish it from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.

Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setVersionAliases

User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}).

The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish it from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.

Parameter
NameDescription
var string[]
Returns
TypeDescription
$this
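
A short sketch of working with version aliases; the alias strings are placeholders chosen to match the [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] format described above.

```php
use Google\Cloud\AIPlatform\V1\Model;

// Aliases let a version be referenced as
// .../models/{model_id}@{version_alias} instead of @{version_id}.
$model = new Model();
$model->setVersionAliases(['default', 'candidate-v2']); // placeholder aliases

foreach ($model->getVersionAliases() as $alias) {
    echo $alias, PHP_EOL;
}
```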

getVersionCreateTime

Output only. Timestamp when this version was created.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasVersionCreateTime

clearVersionCreateTime

setVersionCreateTime

Output only. Timestamp when this version was created.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this
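
The version timestamps are Google\Protobuf\Timestamp messages; a sketch of reading one from a Model returned by the service, assuming the toDateTime() helper provided by the protobuf PHP runtime:

```php
// $model is assumed to be a Model returned by the service; the field is
// output only, so guard against null before reading it.
$createTime = $model->getVersionCreateTime();
if ($createTime !== null) {
    // Google\Protobuf\Timestamp exposes a DateTime conversion helper.
    echo $createTime->toDateTime()->format(DATE_ATOM), PHP_EOL;
}
```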

getVersionUpdateTime

Output only. Timestamp when this version was most recently updated.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasVersionUpdateTime

clearVersionUpdateTime

setVersionUpdateTime

Output only. Timestamp when this version was most recently updated.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this

getDisplayName

Required. The display name of the Model.

The name can be up to 128 characters long and can consist of any UTF-8 characters.

Returns
TypeDescription
string

setDisplayName

Required. The display name of the Model.

The name can be up to 128 characters long and can consist of any UTF-8 characters.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getDescription

The description of the Model.

Returns
TypeDescription
string

setDescription

The description of the Model.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getVersionDescription

The description of this version.

Returns
TypeDescription
string

setVersionDescription

The description of this version.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getPredictSchemata

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\PredictSchemata|null

hasPredictSchemata

clearPredictSchemata

setPredictSchemata

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\PredictSchemata
Returns
TypeDescription
$this
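
A sketch of attaching a PredictSchemata; the Cloud Storage URIs are placeholders, and the field names are assumptions based on the PredictSchemata message rather than values shown on this page.

```php
use Google\Cloud\AIPlatform\V1\PredictSchemata;

// Schema URIs point to OpenAPI 3.0.2 YAML files in Cloud Storage
// (placeholder paths shown here); $model is an existing Model instance.
$model->setPredictSchemata(new PredictSchemata([
    'instance_schema_uri'   => 'gs://my-bucket/schemata/instance.yaml',
    'parameters_schema_uri' => 'gs://my-bucket/schemata/parameters.yaml',
    'prediction_schema_uri' => 'gs://my-bucket/schemata/prediction.yaml',
]));
```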

getMetadataSchemaUri

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object.

AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

Returns
TypeDescription
string

setMetadataSchemaUri

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object.

AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getMetadata

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema_uri.

Unset if the Model does not have any additional information.

Returns
TypeDescription
Google\Protobuf\Value|null

hasMetadata

clearMetadata

setMetadata

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema_uri.

Unset if the Model does not have any additional information.

Parameter
NameDescription
var Google\Protobuf\Value
Returns
TypeDescription
$this
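
Because metadata is a Google\Protobuf\Value, it can hold a scalar, list, or struct; a minimal sketch using a placeholder string value:

```php
use Google\Protobuf\Value;

// Google\Protobuf\Value can carry a string, number, bool, struct, or list.
// The content below is a placeholder; $model is an existing Model instance.
$metadata = new Value();
$metadata->setStringValue('{"framework": "tensorflow"}');

$model->setMetadata($metadata);
```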

getSupportedExportFormats

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setSupportedExportFormats

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

Parameter
NameDescription
var array<Google\Cloud\AIPlatform\V1\Model\ExportFormat>
Returns
TypeDescription
$this

getTrainingPipeline

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

Returns
TypeDescription
string

setTrainingPipeline

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getPipelineJob

Optional. This field is populated if the model is produced by a pipeline job.

Returns
TypeDescription
string

setPipelineJob

Optional. This field is populated if the model is produced by a pipeline job.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getContainerSpec

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI.

Not present for AutoML Models or Large Models.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\ModelContainerSpec|null

hasContainerSpec

clearContainerSpec

setContainerSpec

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI.

Not present for AutoML Models or Large Models.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\ModelContainerSpec
Returns
TypeDescription
$this
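
A sketch of a container spec for a custom-trained Model; the image URI and routes are placeholders, and the ModelContainerSpec field names are assumptions based on that message rather than values shown on this page.

```php
use Google\Cloud\AIPlatform\V1\ModelContainerSpec;

// Input only: ingested when the Model is uploaded via ModelService.UploadModel.
// Image URI and routes are placeholders; $model is an existing Model instance.
$model->setContainerSpec(new ModelContainerSpec([
    'image_uri'     => 'us-docker.pkg.dev/my-project/my-repo/serving:latest',
    'predict_route' => '/predict',
    'health_route'  => '/health',
]));
```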

getArtifactUri

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models or Large Models.

Returns
TypeDescription
string

setArtifactUri

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models or Large Models.

Parameter
NameDescription
var string
Returns
TypeDescription
$this

getSupportedDeploymentResourcesTypes

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain).

Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setSupportedDeploymentResourcesTypes

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain).

Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

Parameter
NameDescription
var int[]
Returns
TypeDescription
$this
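
A sketch that inspects the supported deployment resources types, assuming the nested enum class Google\Cloud\AIPlatform\V1\Model\DeploymentResourcesType generated for this proto:

```php
use Google\Cloud\AIPlatform\V1\Model\DeploymentResourcesType;

// Enum values come back as integers in a RepeatedField;
// $model is an existing Model instance.
$types = iterator_to_array($model->getSupportedDeploymentResourcesTypes());

if (empty($types)) {
    echo 'Model cannot be deployed to an Endpoint (batch prediction only).', PHP_EOL;
} elseif (in_array(DeploymentResourcesType::DEDICATED_RESOURCES, $types, true)) {
    echo 'Model supports deployment with dedicated resources.', PHP_EOL;
}
```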

getSupportedInputStorageFormats

Output only. The formats this Model supports in BatchPredictionJob.input_config.

If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats, it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setSupportedInputStorageFormats

Output only. The formats this Model supports in BatchPredictionJob.input_config.

If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.
  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.
  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.
  • file-list Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats, it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
Parameter
NameDescription
var string[]
Returns
TypeDescription
$this
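
A sketch that checks for JSON Lines support before preparing batch prediction input:

```php
// The formats are plain strings such as 'jsonl', 'csv', or 'bigquery';
// $model is an existing Model instance.
$inputFormats = iterator_to_array($model->getSupportedInputStorageFormats());

if (in_array('jsonl', $inputFormats, true)) {
    echo 'Batch prediction input can be provided as JSON Lines in Cloud Storage.', PHP_EOL;
}
```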

getSupportedOutputStorageFormats

Output only. The formats this Model supports in BatchPredictionJob.output_config.

If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats, it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setSupportedOutputStorageFormats

Output only. The formats this Model supports in BatchPredictionJob.output_config.

If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  • bigquery Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats, it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
Parameter
NameDescription
var string[]
Returns
TypeDescription
$this

getCreateTime

Output only. Timestamp when this Model was uploaded into Vertex AI.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasCreateTime

clearCreateTime

setCreateTime

Output only. Timestamp when this Model was uploaded into Vertex AI.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this

getUpdateTime

Output only. Timestamp when this Model was most recently updated.

Returns
TypeDescription
Google\Protobuf\Timestamp|null

hasUpdateTime

clearUpdateTime

setUpdateTime

Output only. Timestamp when this Model was most recently updated.

Parameter
NameDescription
var Google\Protobuf\Timestamp
Returns
TypeDescription
$this

getDeployedModels

Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.

Returns
TypeDescription
Google\Protobuf\Internal\RepeatedField

setDeployedModels

Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.

Parameter
NameDescription
var array<Google\Cloud\AIPlatform\V1\DeployedModelRef>
Returns
TypeDescription
$this

getExplanationSpec

The default explanation specification for this Model.

If it is populated, the Model can be used for requesting explanation after being deployed and for batch explanation. All fields of the explanation_spec can be overridden by the explanation_spec of DeployModelRequest.deployed_model, or the explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, the Model can still be used for requesting explanation by setting the explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting the explanation_spec of BatchPredictionJob.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\ExplanationSpec|null

hasExplanationSpec

clearExplanationSpec

setExplanationSpec

The default explanation specification for this Model.

If it is populated, the Model can be used for requesting explanation after being deployed and for batch explanation. All fields of the explanation_spec can be overridden by the explanation_spec of DeployModelRequest.deployed_model, or the explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, the Model can still be used for requesting explanation by setting the explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting the explanation_spec of BatchPredictionJob.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\ExplanationSpec
Returns
TypeDescription
$this

getEtag

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

Returns
TypeDescription
string

setEtag

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

Parameter
NameDescription
var string
Returns
TypeDescription
$this
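
A sketch of the read-modify-write pattern the etag enables: carry the etag of the Model you read into the Model you send back so that a concurrent change is detected rather than silently overwritten. The surrounding fetch and update RPCs (e.g. via ModelServiceClient) are assumed and not shown here.

```php
use Google\Cloud\AIPlatform\V1\Model;

// $current is assumed to be a Model previously fetched from the service.
// Copy its etag into the message you send back so the service can reject
// the write if someone else modified the Model in the meantime.
$updated = (new Model())
    ->setName($current->getName())
    ->setDisplayName('renamed-model') // placeholder change
    ->setEtag($current->getEtag());

// Pass $updated (plus an appropriate field mask) to the update RPC.
```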

getLabels

The labels with user-defined metadata to organize your Models.

Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

Returns
TypeDescription
Google\Protobuf\Internal\MapField

setLabels

The labels with user-defined metadata to organize your Models.

Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

Parameter
NameDescription
var array|Google\Protobuf\Internal\MapField
Returns
TypeDescription
$this
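
A sketch of setting labels; the keys and values are placeholders that follow the constraints above.

```php
// A plain associative array is accepted and converted to a MapField
// internally; $model is an existing Model instance.
$model->setLabels([
    'team'  => 'fraud-detection',
    'stage' => 'production',
]);

foreach ($model->getLabels() as $key => $value) {
    echo "$key=$value", PHP_EOL;
}
```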

getDataStats

Stats of data used for training or evaluating the Model.

Only populated when the Model is trained by a TrainingPipeline with data_input_config.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\Model\DataStats|null

hasDataStats

clearDataStats

setDataStats

Stats of data used for training or evaluating the Model.

Only populated when the Model is trained by a TrainingPipeline with data_input_config.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\Model\DataStats
Returns
TypeDescription
$this

getEncryptionSpec

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\EncryptionSpec|null

hasEncryptionSpec

clearEncryptionSpec

setEncryptionSpec

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\EncryptionSpec
Returns
TypeDescription
$this
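
A sketch of attaching a customer-managed encryption key; the key resource name is a placeholder.

```php
use Google\Cloud\AIPlatform\V1\EncryptionSpec;

// The KMS key must be usable in the Model's region; the resource name
// below is a placeholder, and $model is an existing Model instance.
$model->setEncryptionSpec(new EncryptionSpec([
    'kms_key_name' => 'projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key',
]));
```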

getModelSourceInfo

Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\ModelSourceInfo|null

hasModelSourceInfo

clearModelSourceInfo

setModelSourceInfo

Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\ModelSourceInfo
Returns
TypeDescription
$this

getOriginalModelInfo

Output only. If this Model is a copy of another Model, this contains info about the original.

Returns
TypeDescription
Google\Cloud\AIPlatform\V1\Model\OriginalModelInfo|null

hasOriginalModelInfo

clearOriginalModelInfo

setOriginalModelInfo

Output only. If this Model is a copy of another Model, this contains info about the original.

Parameter
NameDescription
var Google\Cloud\AIPlatform\V1\Model\OriginalModelInfo
Returns
TypeDescription
$this

getMetadataArtifact

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.

Returns
TypeDescription
string

setMetadataArtifact

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.

Parameter
NameDescription
var string
Returns
TypeDescription
$this