Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob (v0.3.0)

Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob.

Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#analysis_instance_schema_uri

def analysis_instance_schema_uri() -> ::String
Returns
  • (::String) — YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.

    If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with every field of the predict instance formatted as a string.

#analysis_instance_schema_uri=

def analysis_instance_schema_uri=(value) -> ::String
Parameter
  • value (::String) — YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.

    If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with every field of the predict instance formatted as a string.

Returns
  • (::String) — YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.

    If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with every field of the predict instance formatted as a string.

#bigquery_tables

def bigquery_tables() -> ::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringBigQueryTable>
Returns

#create_time

def create_time() -> ::Google::Protobuf::Timestamp
Returns

#display_name

def display_name() -> ::String
Returns
  • (::String) — Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

#display_name=

def display_name=(value) -> ::String
Parameter
  • value (::String) — Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Returns
  • (::String) — Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
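The display-name constraint above (at most 128 characters, any UTF-8 allowed) can be checked client-side before creating a job. The helper below is a hypothetical sketch, not part of the gem:

```ruby
# Hypothetical client-side check mirroring the documented display_name
# constraint: required, up to 128 characters, any UTF-8 characters allowed.
def valid_display_name?(name)
  return false if name.nil? || name.empty?
  # String#length counts Unicode characters, not bytes.
  name.length <= 128
end
```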

#enable_monitoring_pipeline_logs

def enable_monitoring_pipeline_logs() -> ::Boolean
Returns
  • (::Boolean) — If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.

#enable_monitoring_pipeline_logs=

def enable_monitoring_pipeline_logs=(value) -> ::Boolean
Parameter
  • value (::Boolean) — If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.
Returns
  • (::Boolean) — If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.

#encryption_spec

def encryption_spec() -> ::Google::Cloud::AIPlatform::V1::EncryptionSpec
Returns
  • (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.

#encryption_spec=

def encryption_spec=(value) -> ::Google::Cloud::AIPlatform::V1::EncryptionSpec
Parameter
  • value (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
Returns
  • (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.

#endpoint

def endpoint() -> ::String
Returns
  • (::String) — Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

#endpoint=

def endpoint=(value) -> ::String
Parameter
  • value (::String) — Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Returns
  • (::String) — Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
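The endpoint resource name follows a fixed pattern. A small helper to build it (a hypothetical sketch; the generated service clients expose comparable path helpers) makes the format concrete:

```ruby
# Hypothetical helper that builds the documented endpoint resource name:
#   projects/{project}/locations/{location}/endpoints/{endpoint}
def endpoint_path(project:, location:, endpoint:)
  "projects/#{project}/locations/#{location}/endpoints/#{endpoint}"
end
```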

#error

def error() -> ::Google::Rpc::Status
Returns
  • (::Google::Rpc::Status) — Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

#labels

def labels() -> ::Google::Protobuf::Map{::String => ::String}
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.

    Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

    See https://goo.gl/xmQnxf for more information and examples of labels.

#labels=

def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
  • value (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.

    Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

    See https://goo.gl/xmQnxf for more information and examples of labels.

Returns
  • (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.

    Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

    See https://goo.gl/xmQnxf for more information and examples of labels.
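The label constraints quoted above (keys and values at most 64 Unicode codepoints; lowercase letters, digits, underscores, and dashes, with international characters allowed) can be sketched as a loose client-side validator. This is a hypothetical approximation of the quoted rules, not the authoritative check:

```ruby
# Loose sketch of the documented label constraints: \p{Ll} matches
# lowercase letters (including accented ones), \p{Lo} matches letters
# from caseless scripts (e.g. CJK), plus digits, underscores, dashes.
LABEL_RE = /\A[\p{Ll}\p{Lo}\d_-]*\z/

def valid_label?(key, value)
  return false if key.empty? || key.length > 64 || value.length > 64
  key.match?(LABEL_RE) && value.match?(LABEL_RE)
end
```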

#latest_monitoring_pipeline_metadata

def latest_monitoring_pipeline_metadata() -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob::LatestMonitoringPipelineMetadata
Returns

#log_ttl

def log_ttl() -> ::Google::Protobuf::Duration
Returns
  • (::Google::Protobuf::Duration) — The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL: the effective TTL is the ceiling of TTL/86400 seconds (one day). For example, { seconds: 3600 } indicates a TTL of 1 day.

#log_ttl=

def log_ttl=(value) -> ::Google::Protobuf::Duration
Parameter
  • value (::Google::Protobuf::Duration) — The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL: the effective TTL is the ceiling of TTL/86400 seconds (one day). For example, { seconds: 3600 } indicates a TTL of 1 day.
Returns
  • (::Google::Protobuf::Duration) — The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL: the effective TTL is the ceiling of TTL/86400 seconds (one day). For example, { seconds: 3600 } indicates a TTL of 1 day.
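The day-ceiling rounding described above can be sketched in plain Ruby, assuming the Duration has been reduced to a seconds value:

```ruby
# The TTL granularity is one day: the effective table TTL is
# ceil(duration_seconds / 86400), so e.g. 3600 seconds rounds up to 1 day.
SECONDS_PER_DAY = 86_400

def effective_ttl_days(seconds)
  (seconds.to_f / SECONDS_PER_DAY).ceil
end
```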

#logging_sampling_strategy

def logging_sampling_strategy() -> ::Google::Cloud::AIPlatform::V1::SamplingStrategy
Returns

#logging_sampling_strategy=

def logging_sampling_strategy=(value) -> ::Google::Cloud::AIPlatform::V1::SamplingStrategy
Parameter
Returns

#model_deployment_monitoring_objective_configs

def model_deployment_monitoring_objective_configs() -> ::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>
Returns

#model_deployment_monitoring_objective_configs=

def model_deployment_monitoring_objective_configs=(value) -> ::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>
Parameter
Returns

#model_deployment_monitoring_schedule_config

def model_deployment_monitoring_schedule_config() -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig
Returns

#model_deployment_monitoring_schedule_config=

def model_deployment_monitoring_schedule_config=(value) -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig
Parameter
Returns

#model_monitoring_alert_config

def model_monitoring_alert_config() -> ::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig
Returns

#model_monitoring_alert_config=

def model_monitoring_alert_config=(value) -> ::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig
Parameter
Returns

#name

def name() -> ::String
Returns
  • (::String) — Output only. Resource name of a ModelDeploymentMonitoringJob.

#next_schedule_time

def next_schedule_time() -> ::Google::Protobuf::Timestamp
Returns

#predict_instance_schema_uri

def predict_instance_schema_uri() -> ::String
Returns
  • (::String) — YAML schema file URI describing the format of a single instance given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema is generated from collected predict requests.

#predict_instance_schema_uri=

def predict_instance_schema_uri=(value) -> ::String
Parameter
  • value (::String) — YAML schema file URI describing the format of a single instance given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema is generated from collected predict requests.
Returns
  • (::String) — YAML schema file URI describing the format of a single instance given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema is generated from collected predict requests.

#sample_predict_instance

def sample_predict_instance() -> ::Google::Protobuf::Value
Returns

#sample_predict_instance=

def sample_predict_instance=(value) -> ::Google::Protobuf::Value
Parameter
Returns

#schedule_state

def schedule_state() -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob::MonitoringScheduleState
Returns

#state

def state() -> ::Google::Cloud::AIPlatform::V1::JobState
Returns
  • (::Google::Cloud::AIPlatform::V1::JobState) — Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'; once it is successfully created, the state becomes 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.

#stats_anomalies_base_directory

def stats_anomalies_base_directory() -> ::Google::Cloud::AIPlatform::V1::GcsDestination
Returns

#stats_anomalies_base_directory=

def stats_anomalies_base_directory=(value) -> ::Google::Cloud::AIPlatform::V1::GcsDestination
Parameter
Returns

#update_time

def update_time() -> ::Google::Protobuf::Timestamp
Returns