Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::DeployedModel (v0.59.0)

Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::DeployedModel.

A deployment of a Model. Endpoints contain one or more DeployedModels.
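
For orientation, here is a minimal sketch of building this message in Ruby. The project, location, model, display name, and machine shape below are illustrative placeholders, and the resulting message would typically be passed to the endpoint service's deploy_model call.

require "google/cloud/ai_platform/v1"

# Illustrative values only; substitute your own project, location, and model.
deployed_model = Google::Cloud::AIPlatform::V1::DeployedModel.new(
  model: "projects/my-project/locations/us-central1/models/my-model",
  display_name: "my-deployed-model",
  dedicated_resources: Google::Cloud::AIPlatform::V1::DedicatedResources.new(
    machine_spec: Google::Cloud::AIPlatform::V1::MachineSpec.new(machine_type: "n1-standard-4"),
    min_replica_count: 1,
    max_replica_count: 2
  )
)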

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#automatic_resources

def automatic_resources() -> ::Google::Cloud::AIPlatform::V1::AutomaticResources
Returns
  • (::Google::Cloud::AIPlatform::V1::AutomaticResources)

#automatic_resources=

def automatic_resources=(value) -> ::Google::Cloud::AIPlatform::V1::AutomaticResources
Parameter
  • value (::Google::Cloud::AIPlatform::V1::AutomaticResources)
Returns
  • (::Google::Cloud::AIPlatform::V1::AutomaticResources)

#create_time

def create_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp) — Output only. Timestamp when the DeployedModel was created.

#dedicated_resources

def dedicated_resources() -> ::Google::Cloud::AIPlatform::V1::DedicatedResources
Returns
  • (::Google::Cloud::AIPlatform::V1::DedicatedResources)

#dedicated_resources=

def dedicated_resources=(value) -> ::Google::Cloud::AIPlatform::V1::DedicatedResources
Parameter
  • value (::Google::Cloud::AIPlatform::V1::DedicatedResources)
Returns
  • (::Google::Cloud::AIPlatform::V1::DedicatedResources)

#disable_container_logging

def disable_container_logging() -> ::Boolean
Returns
  • (::Boolean) — For custom-trained Models and AutoML Tabular Models, the containers of the DeployedModel's instances send stderr and stdout streams to Cloud Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

#disable_container_logging=

def disable_container_logging=(value) -> ::Boolean
Parameter
  • value (::Boolean) — For custom-trained Models and AutoML Tabular Models, the containers of the DeployedModel's instances send stderr and stdout streams to Cloud Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.

Returns
  • (::Boolean) — For custom-trained Models and AutoML Tabular Models, the containers of the DeployedModel's instances send stderr and stdout streams to Cloud Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing.

    Users can disable container logging by setting this flag to true.
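
As a quick sketch (assuming deployed_model is the message built in the example near the top of this page), container logging can be turned off like this:

# Opt out of streaming stderr and stdout to Cloud Logging for this deployment.
deployed_model.disable_container_logging = true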

#disable_explanations

def disable_explanations() -> ::Boolean
Returns
  • (::Boolean)

#disable_explanations=

def disable_explanations=(value) -> ::Boolean
Parameter
  • value (::Boolean)
Returns
  • (::Boolean)

#display_name

def display_name() -> ::String
Returns
  • (::String) — The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

#display_name=

def display_name=(value) -> ::String
Parameter
  • value (::String) — The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
Returns
  • (::String) — The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

#enable_access_logging

def enable_access_logging() -> ::Boolean
Returns
  • (::Boolean) — If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request.

    Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.

#enable_access_logging=

def enable_access_logging=(value) -> ::Boolean
Parameter
  • value (::Boolean) — If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request.

    Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.

Returns
  • (::Boolean) — If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request.

    Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
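
A short sketch of enabling access logs on the same assumed deployed_model message; these logs are billed through Cloud Logging:

# Record an access-log entry (timestamp, latency, etc.) for each prediction request.
deployed_model.enable_access_logging = true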

#explanation_spec

def explanation_spec() -> ::Google::Cloud::AIPlatform::V1::ExplanationSpec
Returns
  • (::Google::Cloud::AIPlatform::V1::ExplanationSpec)

#explanation_spec=

def explanation_spec=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationSpec
Parameter
  • value (::Google::Cloud::AIPlatform::V1::ExplanationSpec)
Returns
  • (::Google::Cloud::AIPlatform::V1::ExplanationSpec)

#faster_deployment_config

def faster_deployment_config() -> ::Google::Cloud::AIPlatform::V1::FasterDeploymentConfig
Returns
  • (::Google::Cloud::AIPlatform::V1::FasterDeploymentConfig)

#faster_deployment_config=

def faster_deployment_config=(value) -> ::Google::Cloud::AIPlatform::V1::FasterDeploymentConfig
Parameter
  • value (::Google::Cloud::AIPlatform::V1::FasterDeploymentConfig)
Returns
  • (::Google::Cloud::AIPlatform::V1::FasterDeploymentConfig)

#id

def id() -> ::String
Returns
  • (::String) — Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

    This value should be 1-10 characters, and valid characters are /[0-9]/.

#id=

def id=(value) -> ::String
Parameter
  • value (::String) — Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

    This value should be 1-10 characters, and valid characters are /[0-9]/.

Returns
  • (::String) — Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

    This value should be 1-10 characters, and valid characters are /[0-9]/.
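
If a caller-chosen ID is preferred over a generated one, a sketch that stays within the documented constraint (1-10 characters, digits only); the value shown is arbitrary:

# Must be 1-10 characters matching /[0-9]/; "12345" is just an example.
deployed_model.id = "12345"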

#model

def model() -> ::String
Returns
  • (::String) — Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

    The resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is deployed.

#model=

def model=(value) -> ::String
Parameter
  • value (::String) — Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

    The resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is deployed.

Returns
  • (::String) — Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

    The resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is deployed.
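
A sketch of the documented ways to reference the Model, using placeholder resource names:

# Pin a specific version by numeric version ID.
deployed_model.model = "projects/my-project/locations/us-central1/models/my-model@2"
# Or pin a version by alias.
deployed_model.model = "projects/my-project/locations/us-central1/models/my-model@golden"
# Or omit the suffix to deploy the model's default version.
deployed_model.model = "projects/my-project/locations/us-central1/models/my-model"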

#model_version_id

def model_version_id() -> ::String
Returns
  • (::String) — Output only. The version ID of the model that is deployed.

#private_endpoints

def private_endpoints() -> ::Google::Cloud::AIPlatform::V1::PrivateEndpoints
Returns
  • (::Google::Cloud::AIPlatform::V1::PrivateEndpoints)

#service_account

def service_account() -> ::String
Returns
  • (::String) — The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

#service_account=

def service_account=(value) -> ::String
Parameter
  • value (::String) — The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

Returns
  • (::String) — The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

    Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
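
A sketch with a placeholder service account address; the user performing the deployment would need iam.serviceAccounts.actAs on this account:

# The DeployedModel's container will run as this service account.
deployed_model.service_account = "model-runner@my-project.iam.gserviceaccount.com"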

#shared_resources

def shared_resources() -> ::String
Returns
  • (::String) — The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

#shared_resources=

def shared_resources=(value) -> ::String
Parameter
  • value (::String) — The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
Returns
  • (::String) — The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
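
A sketch using a placeholder pool name in the documented format; when deploying into a shared DeploymentResourcePool, dedicated or automatic resources are typically not set on the DeployedModel itself:

# Placeholder DeploymentResourcePool resource name.
deployed_model.shared_resources =
  "projects/my-project/locations/us-central1/deploymentResourcePools/my-pool"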

#status

def status() -> ::Google::Cloud::AIPlatform::V1::DeployedModel::Status
Returns
  • (::Google::Cloud::AIPlatform::V1::DeployedModel::Status)

#system_labels

def system_labels() -> ::Google::Protobuf::Map{::String => ::String}
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.

#system_labels=

def system_labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
  • value (::Google::Protobuf::Map{::String => ::String}) — System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.