Interface ImageClassificationModelMetadataOrBuilder (2.2.3)

public interface ImageClassificationModelMetadataOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods

getBaseModelId()

public abstract String getBaseModelId()

Optional. The ID of the base model. If it is specified, the new model will be created based on the base model. Otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model to create, and have the same model_type.

string base_model_id = 1;

Returns
Type        Description
String      The baseModelId.
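
As a usage sketch, the snippet below builds an ImageClassificationModelMetadata with a base model ID via the standard protobuf-generated builder (assumed here to live in com.google.cloud.automl.v1beta1, matching this field layout); the model ID shown is a hypothetical placeholder.

import com.google.cloud.automl.v1beta1.ImageClassificationModelMetadata;

public class BaseModelIdExample {
  public static void main(String[] args) {
    // Train the new model on top of an existing base model.
    // "ICN1234567890" is a hypothetical placeholder; use the ID of an existing
    // model in the same project and location with the same model_type.
    ImageClassificationModelMetadata metadata =
        ImageClassificationModelMetadata.newBuilder()
            .setBaseModelId("ICN1234567890")
            .build();

    // The OrBuilder interface exposes the matching getter on both the
    // message and its builder.
    System.out.println("Base model ID: " + metadata.getBaseModelId());
  }
}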

getBaseModelIdBytes()

public abstract ByteString getBaseModelIdBytes()

Optional. The ID of the base model. If it is specified, the new model will be created based on the base model. Otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model to create, and have the same model_type.

string base_model_id = 1;

Returns
Type        Description
ByteString  The bytes for baseModelId.

getModelType()

public abstract String getModelType()

Optional. Type of the model. The available values are:

  • cloud - Model to be used via prediction calls to AutoML API. This is the default value.
  • mobile-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  • mobile-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
  • mobile-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
  • mobile-core-ml-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  • mobile-core-ml-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards.
  • mobile-core-ml-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.

string model_type = 7;

Returns
Type        Description
String      The modelType.
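
The sketch below assumes the same generated builder and shows selecting one of the model types listed above when preparing training metadata; the choice of "mobile-low-latency-1" is illustrative only.

import com.google.cloud.automl.v1beta1.ImageClassificationModelMetadata;

public class ModelTypeExample {
  public static void main(String[] args) {
    // Ask for an exportable low-latency mobile model rather than the
    // default "cloud" type.
    ImageClassificationModelMetadata metadata =
        ImageClassificationModelMetadata.newBuilder()
            .setModelType("mobile-low-latency-1")
            .build();

    System.out.println("Requested model type: " + metadata.getModelType());
  }
}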

getModelTypeBytes()

public abstract ByteString getModelTypeBytes()

Optional. Type of the model. The available values are:

  • cloud - Model to be used via prediction calls to AutoML API. This is the default value.
  • mobile-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  • mobile-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
  • mobile-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
  • mobile-core-ml-low-latency-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
  • mobile-core-ml-versatile-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards.
  • mobile-core-ml-high-accuracy-1 - A model that, in addition to providing prediction via AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.

string model_type = 7;

Returns
Type        Description
ByteString  The bytes for modelType.

getNodeCount()

public abstract long getNodeCount()

Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.

int64 node_count = 14;

Returns
Type        Description
long        The nodeCount.

getNodeQps()

public abstract double getNodeQps()

Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.

double node_qps = 13;

Returns
Type        Description
double      The nodeQps.
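
Because node_qps is reported per node, a rough total-throughput estimate can be derived by combining it with node_count, as in the sketch below. Both fields are output only, so the metadata would come from a deployed Model returned by the API; the helper name estimateTotalQps is illustrative, not part of the library.

import com.google.cloud.automl.v1beta1.ImageClassificationModelMetadataOrBuilder;

public class CapacityEstimate {
  // Multiplies the per-node QPS by the number of deployed nodes to estimate
  // the total online prediction throughput of the deployment.
  static double estimateTotalQps(ImageClassificationModelMetadataOrBuilder metadata) {
    return metadata.getNodeCount() * metadata.getNodeQps();
  }
}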

getStopReason()

public abstract String getStopReason()

Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.

string stop_reason = 5;

Returns
Type        Description
String      The stopReason.

getStopReasonBytes()

public abstract ByteString getStopReasonBytes()

Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.

string stop_reason = 5;

Returns
Type        Description
ByteString  The bytes for stopReason.

getTrainBudget()

public abstract long getTrainBudget()

Required. The train budget of creating this model, expressed in hours. The actual train_cost will be equal to or less than this value.

int64 train_budget = 2;

Returns
Type        Description
long        The trainBudget.

getTrainCost()

public abstract long getTrainCost()

Output only. The actual train cost of creating this model, expressed in hours. If this model is created from a base model, the train cost used to create the base model is not included.

int64 train_cost = 3;

Returns
Type        Description
long        The trainCost.
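
The sketch below ties train_budget, train_cost, and stop_reason together when inspecting a finished create-model operation; the helper name summarize is illustrative, not part of the library.

import com.google.cloud.automl.v1beta1.ImageClassificationModelMetadataOrBuilder;

public class TrainingSummary {
  // train_cost and stop_reason are output only, so the metadata would be read
  // from a Model returned by the AutoML API after the create-model operation
  // finishes. train_cost is always equal to or less than train_budget.
  static void summarize(ImageClassificationModelMetadataOrBuilder metadata) {
    System.out.printf("Trained for %d of %d budgeted hours (stop reason: %s)%n",
        metadata.getTrainCost(), metadata.getTrainBudget(), metadata.getStopReason());
  }
}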