Optional. The ID of the base model. If it is specified, the new model
will be created based on the base model; otherwise, the new model will be
created from scratch. The base model must be in the same project and
location as the new model being created, and must have the same
model_type.
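A minimal sketch of how base_model_id might be set when training a new
model, assuming the google-cloud-automl Python client and its v1beta1
ImageClassificationModelMetadata message; the project, location, dataset,
and base-model IDs below are placeholders, not real values.

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    # Placeholder IDs; substitute your own project, dataset, and existing model.
    parent = "projects/PROJECT_ID/locations/us-central1"
    metadata = automl.ImageClassificationModelMetadata(
        base_model_id="ICN_BASE_MODEL_ID",  # train on top of this model, not from scratch
        train_budget=1,                     # assumed training budget field, in hours
    )
    model = automl.Model(
        display_name="model_from_base",
        dataset_id="ICN_DATASET_ID",
        image_classification_model_metadata=metadata,
    )

    # create_model is long-running; .result() blocks until training completes.
    operation = client.create_model(parent=parent, model=model)
    created = operation.result()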
Optional. Type of the model. The available values are listed below
(a selection sketch follows this list):
*   cloud - Model to be used via prediction calls to AutoML API. This is
    the default value.
*   mobile-low-latency-1 - A model that, in addition to providing
    prediction via AutoML API, can also be exported (see
    AutoMl.ExportModel) and used on a mobile or edge device with
    TensorFlow afterwards. Expected to have low latency, but may have
    lower prediction quality than other models.
*   mobile-versatile-1 - A model that, in addition to providing
    prediction via AutoML API, can also be exported (see
    AutoMl.ExportModel) and used on a mobile or edge device with
    TensorFlow afterwards.
*   mobile-high-accuracy-1 - A model that, in addition to providing
    prediction via AutoML API, can also be exported (see
    AutoMl.ExportModel) and used on a mobile or edge device with
    TensorFlow afterwards. Expected to have a higher latency, but should
    also have a higher prediction quality than other models.
*   mobile-core-ml-low-latency-1 - A model that, in addition to providing
    prediction via AutoML API, can also be exported (see
    AutoMl.ExportModel) and used on a mobile device with Core ML
    afterwards. Expected to have low latency, but may have lower
    prediction quality than other models.
*   mobile-core-ml-versatile-1 - A model that, in addition to providing
    prediction via AutoML API, can also be exported (see
    AutoMl.ExportModel) and used on a mobile device with Core ML
    afterwards.
*   mobile-core-ml-high-accuracy-1 - A model that, in addition to
    providing prediction via AutoML API, can also be exported (see
    AutoMl.ExportModel) and used on a mobile device with Core ML
    afterwards. Expected to have a higher latency, but should also have a
    higher prediction quality than other models.
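As referenced above, a minimal sketch of requesting one of the exportable
model types at training time, under the same v1beta1 Python client
assumptions as the earlier example; the IDs are placeholders and the
export step is only indicated in a comment.

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    parent = "projects/PROJECT_ID/locations/us-central1"  # placeholder project/location
    metadata = automl.ImageClassificationModelMetadata(
        model_type="mobile-low-latency-1",  # exportable, low-latency edge model
    )
    model = automl.Model(
        display_name="edge_model",
        dataset_id="ICN_DATASET_ID",        # placeholder dataset ID
        image_classification_model_metadata=metadata,
    )
    operation = client.create_model(parent=parent, model=model)
    trained = operation.result()

    # After training, a mobile-* model can be exported with AutoMl.ExportModel
    # (client.export_model) for on-device use with TensorFlow or Core ML.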
Output only. The number of nodes this model is deployed on. A node is an
abstraction of a machine resource that can handle the online prediction
queries per second (QPS) given in the node_qps field.
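As a rough illustration of how node_count and node_qps relate, the
deployment's approximate total online prediction capacity is their
product; the numbers below are hypothetical, not values returned by the
API.

    # Hypothetical values standing in for the output-only metadata fields.
    node_qps = 3.2    # QPS a single node can handle (node_qps)
    node_count = 4    # nodes the model is deployed on (node_count)

    # Approximate total capacity of the deployment.
    print(f"~{node_qps * node_count:.1f} online predictions per second")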
Output only. The actual train cost of creating this model, expressed in
hours. If this model was created from a base model, the train cost used
to create the base model is not included.
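A minimal sketch of reading these output-only fields back after training,
again assuming the v1beta1 Python client; the model resource name is a
placeholder.

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    # Placeholder resource name; substitute your project, location, and model ID.
    name = "projects/PROJECT_ID/locations/us-central1/models/MODEL_ID"
    model = client.get_model(name=name)

    metadata = model.image_classification_model_metadata
    print("Train cost (hours):", metadata.train_cost)  # excludes any base model's cost
    print("Deployed node count:", metadata.node_count)
    print("QPS per node:", metadata.node_qps)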
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-12-19 UTC."],[],[]]