Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::DeployedModel (v0.9.1)


Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::DeployedModel.

A deployment of a Model. Endpoints contain one or more DeployedModels.
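The accessor methods documented below come in reader/writer pairs (`#field` and `#field=`). As a minimal sketch of that pattern, the hypothetical `Struct` below stands in for the generated protobuf class; running code against the real class requires the google-cloud-ai_platform gem.

```ruby
# Hypothetical stand-in (a plain Struct, NOT the real protobuf message class)
# mirroring a few of the DeployedModel fields documented on this page.
DeployedModelSketch = Struct.new(
  :id, :model, :display_name, :disable_container_logging,
  keyword_init: true
)

dm = DeployedModelSketch.new(
  model: "projects/my-project/locations/us-central1/models/my-model",
  display_name: "my-deployment"
)

# Writer accessor, as in dm.disable_container_logging = true on the real class.
dm.disable_container_logging = true

puts dm.display_name # => my-deployment
```

With the real class, construction works the same way via `Google::Cloud::AIPlatform::V1::DeployedModel.new(model: "...", display_name: "...")`.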

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#automatic_resources

def automatic_resources() -> ::Google::Cloud::AIPlatform::V1::AutomaticResources
Returns
  • (::Google::Cloud::AIPlatform::V1::AutomaticResources)

#automatic_resources=

def automatic_resources=(value) -> ::Google::Cloud::AIPlatform::V1::AutomaticResources
Parameter
  • value (::Google::Cloud::AIPlatform::V1::AutomaticResources)
Returns
  • (::Google::Cloud::AIPlatform::V1::AutomaticResources)

#create_time

def create_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)

#dedicated_resources

def dedicated_resources() -> ::Google::Cloud::AIPlatform::V1::DedicatedResources
Returns
  • (::Google::Cloud::AIPlatform::V1::DedicatedResources)

#dedicated_resources=

def dedicated_resources=(value) -> ::Google::Cloud::AIPlatform::V1::DedicatedResources
Parameter
  • value (::Google::Cloud::AIPlatform::V1::DedicatedResources)
Returns
  • (::Google::Cloud::AIPlatform::V1::DedicatedResources)

#disable_container_logging

def disable_container_logging() -> ::Boolean
Returns
  • (::Boolean) — For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Cloud Logging by default. Note that these logs incur a cost, which is subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

#disable_container_logging=

def disable_container_logging=(value) -> ::Boolean
Parameter
  • value (::Boolean) — For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Cloud Logging by default. Note that these logs incur a cost, which is subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

Returns
  • (::Boolean) — For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Cloud Logging by default. Note that these logs incur a cost, which is subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

#display_name

def display_name() -> ::String
Returns
  • (::String) — The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

#display_name=

def display_name=(value) -> ::String
Parameter
  • value (::String) — The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
Returns
  • (::String) — The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

#enable_access_logging

def enable_access_logging() -> ::Boolean
Returns
  • (::Boolean) — These logs are like standard server access logs, containing information such as the timestamp and latency of each prediction request.

    Note that these logs may incur a cost, especially if your project receives prediction requests at a high query-per-second (QPS) rate. Estimate your costs before enabling this option.

#enable_access_logging=

def enable_access_logging=(value) -> ::Boolean
Parameter
  • value (::Boolean) — These logs are like standard server access logs, containing information such as the timestamp and latency of each prediction request.

    Note that these logs may incur a cost, especially if your project receives prediction requests at a high query-per-second (QPS) rate. Estimate your costs before enabling this option.

Returns
  • (::Boolean) — These logs are like standard server access logs, containing information such as the timestamp and latency of each prediction request.

    Note that these logs may incur a cost, especially if your project receives prediction requests at a high query-per-second (QPS) rate. Estimate your costs before enabling this option.

#explanation_spec

def explanation_spec() -> ::Google::Cloud::AIPlatform::V1::ExplanationSpec
Returns
  • (::Google::Cloud::AIPlatform::V1::ExplanationSpec)

#explanation_spec=

def explanation_spec=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationSpec
Parameter
  • value (::Google::Cloud::AIPlatform::V1::ExplanationSpec)
Returns
  • (::Google::Cloud::AIPlatform::V1::ExplanationSpec)

#id

def id() -> ::String
Returns
  • (::String) — Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

    This value should be 1-10 characters, and valid characters are /[0-9]/.

#id=

def id=(value) -> ::String
Parameter
  • value (::String) — Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

    This value should be 1-10 characters, and valid characters are /[0-9]/.

Returns
  • (::String) — Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

    This value should be 1-10 characters, and valid characters are /[0-9]/.
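The ID constraint above (1-10 characters, digits only) can be checked client-side before deployment. The helper below is a sketch, not part of the gem; the pattern follows directly from the documented /[0-9]/ character class and length limit.

```ruby
# Client-side check for the documented DeployedModel ID constraint:
# 1-10 characters, and every character must match /[0-9]/.
DEPLOYED_MODEL_ID_PATTERN = /\A[0-9]{1,10}\z/

# Hypothetical helper, not provided by the gem.
def valid_deployed_model_id?(id)
  !!(id =~ DEPLOYED_MODEL_ID_PATTERN)
end

valid_deployed_model_id?("12345")       # => true
valid_deployed_model_id?("12345678901") # => false (11 characters)
valid_deployed_model_id?("abc123")      # => false (non-digit characters)
```

Leaving `#id` unset remains the simplest option, since Vertex AI generates a valid ID when none is provided.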

#model

def model() -> ::String
Returns
  • (::String) — Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

    The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version is deployed.

#model=

def model=(value) -> ::String
Parameter
  • value (::String) — Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

    The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version is deployed.

Returns
  • (::String) — Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

    The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version is deployed.
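As a sketch of the resource-name format for `#model=`, the helper below builds the name and optionally appends a version. The `@version` suffix form is an assumption based on Vertex AI's model versioning syntax; verify it against the current Vertex AI documentation before relying on it.

```ruby
# Hypothetical helper (not part of the gem) that builds a Model resource
# name, optionally pinning a version ID or version alias via an "@" suffix
# (assumed syntax; check current Vertex AI docs).
def model_resource_name(project:, location:, model:, version: nil)
  name = "projects/#{project}/locations/#{location}/models/#{model}"
  version ? "#{name}@#{version}" : name
end

# No version: the default version of the Model would be deployed.
model_resource_name(project: "my-project", location: "us-central1", model: "my-model")
# => "projects/my-project/locations/us-central1/models/my-model"

# Pinned to version ID "3".
model_resource_name(project: "my-project", location: "us-central1", model: "my-model", version: "3")
# => "projects/my-project/locations/us-central1/models/my-model@3"
```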

#model_version_id

def model_version_id() -> ::String
Returns
  • (::String) — Output only. The version ID of the model that is deployed.

#private_endpoints

def private_endpoints() -> ::Google::Cloud::AIPlatform::V1::PrivateEndpoints
Returns
  • (::Google::Cloud::AIPlatform::V1::PrivateEndpoints)

#service_account

def service_account() -> ::String
Returns
  • (::String) — The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

#service_account=

def service_account=(value) -> ::String
Parameter
  • value (::String) — The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

Returns
  • (::String) — The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.