Cloud AutoML V1beta1 API - Class Google::Cloud::AutoML::V1beta1::ImageObjectDetectionModelMetadata (v0.11.0)

Reference documentation and code samples for the Cloud AutoML V1beta1 API class Google::Cloud::AutoML::V1beta1::ImageObjectDetectionModelMetadata.

Model metadata specific to image object detection.
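
A minimal sketch of constructing this metadata when creating an image object detection model, assuming the google-cloud-automl-v1beta1 gem; the display name and dataset ID below are placeholder values.

require "google/cloud/automl/v1beta1"

# Placeholder values: substitute your own display name and dataset ID.
metadata = Google::Cloud::AutoML::V1beta1::ImageObjectDetectionModelMetadata.new(
  model_type: "cloud-low-latency-1",
  train_budget_milli_node_hours: 24_000 # 24 node hours
)

model = Google::Cloud::AutoML::V1beta1::Model.new(
  display_name: "my_detection_model",
  dataset_id: "YOUR_DATASET_ID",
  image_object_detection_model_metadata: metadata
)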

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#model_type

def model_type() -> ::String
Returns
  • (::String) —

    Optional. Type of the model. The available values are:

    • cloud-high-accuracy-1 - (default) A model to be used via prediction calls to the AutoML API. Expected to have higher latency, but should also have higher prediction quality than other models.
    • cloud-low-latency-1 - A model to be used via prediction calls to the AutoML API. Expected to have low latency, but may have lower prediction quality than other models.
    • mobile-low-latency-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
    • mobile-versatile-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
    • mobile-high-accuracy-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have higher latency, but should also have higher prediction quality than other models.

#model_type=

def model_type=(value) -> ::String
Parameter
  • value (::String) —

    Optional. Type of the model. The available values are:

    • cloud-high-accuracy-1 - (default) A model to be used via prediction calls to the AutoML API. Expected to have higher latency, but should also have higher prediction quality than other models.
    • cloud-low-latency-1 - A model to be used via prediction calls to the AutoML API. Expected to have low latency, but may have lower prediction quality than other models.
    • mobile-low-latency-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
    • mobile-versatile-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
    • mobile-high-accuracy-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have higher latency, but should also have higher prediction quality than other models.
Returns
  • (::String) —

    Optional. Type of the model. The available values are:

    • cloud-high-accuracy-1 - (default) A model to be used via prediction calls to the AutoML API. Expected to have higher latency, but should also have higher prediction quality than other models.
    • cloud-low-latency-1 - A model to be used via prediction calls to the AutoML API. Expected to have low latency, but may have lower prediction quality than other models.
    • mobile-low-latency-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
    • mobile-versatile-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
    • mobile-high-accuracy-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have higher latency, but should also have higher prediction quality than other models.
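
As a sketch of using this field: assigning one of the mobile model types listed above keeps the option open to export the trained model (see AutoMl.ExportModel) and run it on-device with TensorFlow, while the cloud types serve only through the AutoML API.

metadata = Google::Cloud::AutoML::V1beta1::ImageObjectDetectionModelMetadata.new
metadata.model_type = "mobile-versatile-1" # exportable; can also serve via the AutoML API
puts metadata.model_type # => "mobile-versatile-1"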

#node_count

def node_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource that can handle the online prediction QPS given in the node_qps field.

#node_count=

def node_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource that can handle the online prediction QPS given in the node_qps field.
Returns
  • (::Integer) — Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource that can handle the online prediction QPS given in the node_qps field.

#node_qps

def node_qps() -> ::Float
Returns
  • (::Float) — Output only. The approximate online prediction QPS that this model can support on each node on which it is deployed.

#node_qps=

def node_qps=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The approximate online prediction QPS that this model can support on each node on which it is deployed.
Returns
  • (::Float) — Output only. The approximate online prediction QPS that this model can support on each node on which it is deployed.
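
Both node_count and node_qps are output only, so the sketch below simply reads them back from a model returned by the service (model here is assumed to have been fetched with the AutoML client) and estimates the total serving capacity.

# `model` is assumed to be a Google::Cloud::AutoML::V1beta1::Model returned by the service.
metadata = model.image_object_detection_model_metadata

total_qps = metadata.node_count * metadata.node_qps
puts "Deployed on #{metadata.node_count} node(s), ~#{metadata.node_qps} QPS each"
puts "Approximate total capacity: #{total_qps} QPS"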

#stop_reason

def stop_reason() -> ::String
Returns
  • (::String) — Output only. The reason that this model creation operation stopped, e.g. BUDGET_REACHED or MODEL_CONVERGED.

#stop_reason=

def stop_reason=(value) -> ::String
Parameter
  • value (::String) — Output only. The reason that this model creation operation stopped, e.g. BUDGET_REACHED or MODEL_CONVERGED.
Returns
  • (::String) — Output only. The reason that this model creation operation stopped, e.g. BUDGET_REACHED or MODEL_CONVERGED.
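
A sketch of inspecting why training finished; metadata here is assumed to be the image_object_detection_model_metadata of a model fetched after the create operation completed.

# `metadata` is assumed to come from a trained model returned by the service.
case metadata.stop_reason
when "MODEL_CONVERGED"
  puts "Training stopped early: no further improvement was expected."
when "BUDGET_REACHED"
  puts "Training used the full train budget."
else
  puts "Stop reason: #{metadata.stop_reason}"
end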

#train_budget_milli_node_hours

def train_budget_milli_node_hours() -> ::Integer
Returns
  • (::Integer) — The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost will be equal to or less than this value. If further model training ceases to provide any improvements, training stops without using the full budget and the stop_reason will be MODEL_CONVERGED. Note: node_hour = actual_hour * number_of_nodes_involved. For model types cloud-high-accuracy-1 (default) and cloud-low-latency-1, the train budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall time. For model types mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1, mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, and mobile-core-ml-high-accuracy-1, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.

#train_budget_milli_node_hours=

def train_budget_milli_node_hours=(value) -> ::Integer
Parameter
  • value (::Integer) — The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost will be equal to or less than this value. If further model training ceases to provide any improvements, training stops without using the full budget and the stop_reason will be MODEL_CONVERGED. Note: node_hour = actual_hour * number_of_nodes_involved. For model types cloud-high-accuracy-1 (default) and cloud-low-latency-1, the train budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall time. For model types mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1, mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, and mobile-core-ml-high-accuracy-1, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
Returns
  • (::Integer) — The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost will be equal to or less than this value. If further model training ceases to provide any improvements, training stops without using the full budget and the stop_reason will be MODEL_CONVERGED. Note: node_hour = actual_hour * number_of_nodes_involved. For model types cloud-high-accuracy-1 (default) and cloud-low-latency-1, the train budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall time. For model types mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1, mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, and mobile-core-ml-high-accuracy-1, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
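
A minimal sketch of setting the budget in milli node hours; the specific number below is illustrative, not a recommendation, and must lie within the range given above for the chosen model type.

metadata = Google::Cloud::AutoML::V1beta1::ImageObjectDetectionModelMetadata.new(
  model_type: "cloud-high-accuracy-1"
)
# 100,000 milli node hours = 100 node hours; cloud model types accept 20,000-900,000.
metadata.train_budget_milli_node_hours = 100_000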

#train_cost_milli_node_hours

def train_cost_milli_node_hours() -> ::Integer
Returns
  • (::Integer) — Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget.

#train_cost_milli_node_hours=

def train_cost_milli_node_hours=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget.
Returns
  • (::Integer) — Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget.
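
A sketch of converting the reported cost back to node hours and comparing it with the budget; metadata is assumed to come from a trained model returned by the service.

# `metadata` is assumed to come from a trained model returned by the service.
cost_node_hours = metadata.train_cost_milli_node_hours / 1000.0
budget_node_hours = metadata.train_budget_milli_node_hours / 1000.0
puts format("Used %.1f of %.1f budgeted node hours", cost_node_hours, budget_node_hours)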