public interface DedicatedResourcesOrBuilder extends MessageOrBuilder
Implements
MessageOrBuilder
Methods
getAutoscalingMetricSpecs(int index)
public abstract AutoscalingMetricSpec getAutoscalingMetricSpecs(int index)
Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator duty cycle, and so on). The default target is 60 if not set. At most one entry is allowed per metric.
If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value and scales down only when both metrics are under their target values. The default target value is 60 for both metrics.
If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.
For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
repeated .google.cloud.aiplatform.v1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| AutoscalingMetricSpec | |
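As an illustration of the override described above, the following is a minimal sketch of building a DedicatedResources message whose CPU utilization target is raised to 80 via an AutoscalingMetricSpec. The machine type and replica counts are hypothetical values chosen for the example, not recommendations.

```java
import com.google.cloud.aiplatform.v1.AutoscalingMetricSpec;
import com.google.cloud.aiplatform.v1.DedicatedResources;
import com.google.cloud.aiplatform.v1.MachineSpec;

public class DedicatedResourcesExample {
  public static void main(String[] args) {
    // Override the CPU utilization target (default 60) to 80.
    AutoscalingMetricSpec cpuTarget =
        AutoscalingMetricSpec.newBuilder()
            .setMetricName("aiplatform.googleapis.com/prediction/online/cpu/utilization")
            .setTarget(80)
            .build();

    // Hypothetical machine type and replica counts for illustration.
    DedicatedResources resources =
        DedicatedResources.newBuilder()
            .setMachineSpec(MachineSpec.newBuilder().setMachineType("n1-standard-4").build())
            .setMinReplicaCount(1)
            .setMaxReplicaCount(3)
            .addAutoscalingMetricSpecs(cpuTarget)
            .build();

    System.out.println(resources);
  }
}
```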
getAutoscalingMetricSpecsCount()
public abstract int getAutoscalingMetricSpecsCount()
Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator duty cycle, and so on). The default target is 60 if not set. At most one entry is allowed per metric.
If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value and scales down only when both metrics are under their target values. The default target value is 60 for both metrics.
If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.
For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
repeated .google.cloud.aiplatform.v1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| int | |
getAutoscalingMetricSpecsList()
public abstract List<AutoscalingMetricSpec> getAutoscalingMetricSpecsList()
Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator duty cycle, and so on). The default target is 60 if not set. At most one entry is allowed per metric.
If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value and scales down only when both metrics are under their target values. The default target value is 60 for both metrics.
If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.
For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
repeated .google.cloud.aiplatform.v1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| List<AutoscalingMetricSpec> | |
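As a usage sketch, the repeated-field accessors above can be used to read the configured specs back from any DedicatedResourcesOrBuilder. The helper method and the assumption that the instance comes from DeployedModel.getDedicatedResources() are illustrative, not part of this interface.

```java
import com.google.cloud.aiplatform.v1.AutoscalingMetricSpec;
import com.google.cloud.aiplatform.v1.DedicatedResourcesOrBuilder;

class AutoscalingSpecReader {
  // 'resources' can be any DedicatedResourcesOrBuilder, for example the value
  // returned by DeployedModel.getDedicatedResources() (an assumption for this sketch).
  static void printAutoscalingTargets(DedicatedResourcesOrBuilder resources) {
    for (int i = 0; i < resources.getAutoscalingMetricSpecsCount(); i++) {
      AutoscalingMetricSpec spec = resources.getAutoscalingMetricSpecs(i);
      System.out.printf("%s -> target %d%n", spec.getMetricName(), spec.getTarget());
    }
  }
}
```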
getAutoscalingMetricSpecsOrBuilder(int index)
public abstract AutoscalingMetricSpecOrBuilder getAutoscalingMetricSpecsOrBuilder(int index)
Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator duty cycle, and so on). The default target is 60 if not set. At most one entry is allowed per metric.
If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value and scales down only when both metrics are under their target values. The default target value is 60 for both metrics.
If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.
For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
repeated .google.cloud.aiplatform.v1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| AutoscalingMetricSpecOrBuilder | |
getAutoscalingMetricSpecsOrBuilderList()
public abstract List<? extends AutoscalingMetricSpecOrBuilder> getAutoscalingMetricSpecsOrBuilderList()
Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator duty cycle, and so on). The default target is 60 if not set. At most one entry is allowed per metric.
If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value and scales down only when both metrics are under their target values. The default target value is 60 for both metrics.
If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.
For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
repeated .google.cloud.aiplatform.v1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| List<? extends com.google.cloud.aiplatform.v1.AutoscalingMetricSpecOrBuilder> | |
getMachineSpec()
public abstract MachineSpec getMachineSpec()
Required. Immutable. The specification of a single machine used by the prediction.
.google.cloud.aiplatform.v1.MachineSpec machine_spec = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| MachineSpec | The machineSpec. |
getMachineSpecOrBuilder()
public abstract MachineSpecOrBuilder getMachineSpecOrBuilder()
Required. Immutable. The specification of a single machine used by the prediction.
.google.cloud.aiplatform.v1.MachineSpec machine_spec = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| MachineSpecOrBuilder | |
getMaxReplicaCount()
public abstract int getMaxReplicaCount()
Immutable. The maximum number of replicas this DeployedModel may be deployed on when traffic against it increases. If the requested value is too large, the deployment errors out, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic is dropped. If this value is not provided, min_replica_count is used as the default value.
The value of this field affects the charge against Vertex CPU and GPU quotas. Specifically, you are charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
int32 max_replica_count = 3 [(.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| int | The maxReplicaCount. |
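To make the quota impact described above concrete, here is a small illustrative calculation that follows the field description's formula. The machine shape (8 vCPUs, 1 GPU per replica) and replica count are hypothetical numbers chosen for the example.

```java
class QuotaImpactExample {
  public static void main(String[] args) {
    // Hypothetical values for illustration only.
    int maxReplicaCount = 4;
    int coresPerReplica = 8; // e.g. a machine type with 8 vCPUs
    int gpusPerReplica = 1;  // accelerator_count in the machine spec

    // Per the field description, quota is charged as:
    int cpuQuota = maxReplicaCount * coresPerReplica; // 4 * 8 = 32 vCPUs
    int gpuQuota = maxReplicaCount * gpusPerReplica;  // 4 * 1 = 4 GPUs
    System.out.println("CPU quota: " + cpuQuota + ", GPU quota: " + gpuQuota);
  }
}
```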
getMinReplicaCount()
public abstract int getMinReplicaCount()
Required. Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
int32 min_replica_count = 2 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| int | The minReplicaCount. |
hasMachineSpec()
public abstract boolean hasMachineSpec()
Required. Immutable. The specification of a single machine used by the prediction.
.google.cloud.aiplatform.v1.MachineSpec machine_spec = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
Returns
| Type | Description |
| --- | --- |
| boolean | Whether the machineSpec field is set. |
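As a closing usage sketch, a caller can check presence with hasMachineSpec() before reading the machine spec. The wrapper class and method name are hypothetical; only the accessors documented above are from this interface.

```java
import com.google.cloud.aiplatform.v1.DedicatedResourcesOrBuilder;
import com.google.cloud.aiplatform.v1.MachineSpec;

class MachineSpecCheck {
  static void describeMachine(DedicatedResourcesOrBuilder resources) {
    // hasMachineSpec() reports whether the machine_spec field has been set;
    // getMachineSpec() on an unset field returns the default (empty) instance.
    if (resources.hasMachineSpec()) {
      MachineSpec spec = resources.getMachineSpec();
      System.out.println("Machine type: " + spec.getMachineType());
    } else {
      System.out.println("machine_spec is not set");
    }
  }
}
```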