Reference documentation and code samples for the Cloud AutoML V1beta1 API class Google::Cloud::AutoML::V1beta1::ImageClassificationModelMetadata.
Model metadata for image classification.
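A minimal construction sketch, assuming the google-cloud-automl-v1beta1 gem is available. The display name, dataset ID, and budget are illustrative placeholders, and the Model fields come from the companion Model message rather than this page.

require "google/cloud/automl/v1beta1"

# Metadata for a cloud-hosted image classification model.
metadata = Google::Cloud::AutoML::V1beta1::ImageClassificationModelMetadata.new(
  model_type:   "cloud",
  train_budget: 1 # train for at most one hour
)

# Attach the metadata to a Model resource; the display name and dataset ID are placeholders.
model = Google::Cloud::AutoML::V1beta1::Model.new(
  display_name: "flowers_classifier",
  dataset_id:   "ICN0000000000000000000",
  image_classification_model_metadata: metadata
)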
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#base_model_id
def base_model_id() -> ::String
Returns
- (::String) — Optional. The ID of the base model. If it is specified, the new model will be created based on the base model. Otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model to create, and have the same model_type.
#base_model_id=
def base_model_id=(value) -> ::String
Parameter
- value (::String) — Optional. The ID of the base model. If it is specified, the new model will be created based on the base model. Otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model to create, and have the same model_type.
Returns
- (::String) — Optional. The ID of the base model. If it is specified, the new model will be created based on the base model. Otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model to create, and have the same model_type.
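As an illustrative sketch, the writer method can warm-start training from an existing model in the same project and location; the model ID used here is a placeholder.

metadata = Google::Cloud::AutoML::V1beta1::ImageClassificationModelMetadata.new(model_type: "cloud")

# Train on top of an existing model instead of from scratch.
# The ID below is a placeholder, not a real model ID.
metadata.base_model_id = "ICN1111111111111111111"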
#model_type
def model_type() -> ::String
Returns
- (::String) — Optional. Type of the model. The available values are:
  - cloud - Model to be used via prediction calls to AutoML API. This is the default value.
  - mobile-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  - mobile-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
  - mobile-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
  - mobile-core-ml-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  - mobile-core-ml-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards.
  - mobile-core-ml-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
#model_type=
def model_type=(value) -> ::String
Parameter
- value (::String) — Optional. Type of the model. The available values are:
  - cloud - Model to be used via prediction calls to AutoML API. This is the default value.
  - mobile-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  - mobile-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
  - mobile-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
  - mobile-core-ml-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  - mobile-core-ml-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards.
  - mobile-core-ml-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
Returns
- (::String) — Optional. Type of the model. The available values are:
  - cloud - Model to be used via prediction calls to AutoML API. This is the default value.
  - mobile-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  - mobile-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
  - mobile-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
  - mobile-core-ml-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  - mobile-core-ml-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards.
  - mobile-core-ml-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
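A brief sketch of selecting one of the exportable model types listed above; the string values are the documented ones, everything else is illustrative.

# Request an edge-exportable TensorFlow model tuned for low latency.
metadata = Google::Cloud::AutoML::V1beta1::ImageClassificationModelMetadata.new
metadata.model_type = "mobile-low-latency-1"

# The reader returns the plain string value.
puts metadata.model_type # => "mobile-low-latency-1"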
#node_count
def node_count() -> ::Integer
Returns
- (::Integer) — Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.
#node_count=
def node_count=(value) -> ::Integer
Parameter
- value (::Integer) — Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.
Returns
- (::Integer) — Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.
#node_qps
def node_qps() -> ::Float
Returns
- (::Float) — Output only. An approximate number of online prediction QPS that can be supported by this model per node on which it is deployed.
#node_qps=
def node_qps=(value) -> ::Float
Parameter
- value (::Float) — Output only. An approximate number of online prediction QPS that can be supported by this model per node on which it is deployed.
Returns
- (::Float) — Output only. An approximate number of online prediction QPS that can be supported by this model per node on which it is deployed.
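Both #node_count and #node_qps are output only, so they are read from a Model returned by the service. A rough sketch, assuming a model variable already holds such a response, of estimating total online prediction capacity:

# `model` is assumed to hold a Model returned by the service (e.g. from a get_model call).
metadata = model.image_classification_model_metadata

# Rough total online prediction capacity across all deployed nodes.
total_qps = metadata.node_count * metadata.node_qps
puts "Deployed on #{metadata.node_count} node(s), roughly #{total_qps.round(1)} QPS in total"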
#stop_reason
def stop_reason() -> ::String
Returns
- (::String) — Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.
#stop_reason=
def stop_reason=(value) -> ::String
Parameter
- value (::String) — Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.
Returns
- (::String) — Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.
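A hedged sketch of inspecting why a finished training operation stopped, assuming model holds the returned Model; the compared strings are the example values named above:

case model.image_classification_model_metadata.stop_reason
when "BUDGET_REACHED"
  puts "Training stopped because the train budget was exhausted."
when "MODEL_CONVERGED"
  puts "Training stopped early because the model converged."
end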
#train_budget
def train_budget() -> ::Integer
Returns
- (::Integer) — Required. The train budget of creating this model, expressed in hours. The actual train_cost will be equal to or less than this value.
#train_budget=
def train_budget=(value) -> ::Integer
Parameter
- value (::Integer) — Required. The train budget of creating this model, expressed in hours. The actual train_cost will be equal to or less than this value.
Returns
- (::Integer) — Required. The train budget of creating this model, expressed in hours. The actual train_cost will be equal to or less than this value.
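The budget is set before the model is created and caps training time in hours; a minimal sketch using the writer method:

metadata = Google::Cloud::AutoML::V1beta1::ImageClassificationModelMetadata.new
metadata.train_budget = 8 # allow up to 8 hours of training; the actual cost may be lower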
#train_cost
def train_cost() -> ::Integer
Returns
- (::Integer) — Output only. The actual train cost of creating this model, expressed in hours. If this model is created from a base model, the train cost used to create the base model is not included.
#train_cost=
def train_cost=(value) -> ::Integer
Parameter
- value (::Integer) — Output only. The actual train cost of creating this model, expressed in hours. If this model is created from a base model, the train cost used to create the base model is not included.
Returns
- (::Integer) — Output only. The actual train cost of creating this model, expressed in hours. If this model is created from a base model, the train cost used to create the base model is not included.
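Once training completes, the actual cost can be compared against the requested budget; a sketch assuming model holds the finished Model response:

metadata = model.image_classification_model_metadata
used     = metadata.train_cost
budgeted = metadata.train_budget
puts "Used #{used} of #{budgeted} budgeted training hour(s)"
puts "Training stopped before the budget was exhausted" if used < budgeted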