Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob.
Represents a job that runs periodically to monitor the deployed models in an endpoint. It analyzes the logged training and prediction data to detect any abnormal behavior.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
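A minimal construction sketch, assuming a hypothetical project, location, and endpoint; the nested hashes for the sampling and schedule configs rely on the usual protobuf hash coercion, and the trailing JobService call is shown only for context, not as the definitive workflow.

```ruby
require "google/cloud/ai_platform/v1"

# Build the message with its required fields (all IDs below are placeholders).
job = Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob.new(
  display_name: "churn-model-monitoring",
  endpoint:     "projects/my-project/locations/us-central1/endpoints/1234567890",
  logging_sampling_strategy: { random_sample_config: { sample_rate: 0.8 } },
  model_deployment_monitoring_schedule_config: { monitor_interval: { seconds: 3600 } },
  model_deployment_monitoring_objective_configs: [
    { deployed_model_id: "1111111111" }
  ]
)

# Submit it through the JobService client (sketch; parent is a placeholder).
client = Google::Cloud::AIPlatform::V1::JobService::Client.new
client.create_model_deployment_monitoring_job(
  parent: "projects/my-project/locations/us-central1",
  model_deployment_monitoring_job: job
)
```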
Methods
#analysis_instance_schema_uri
def analysis_instance_schema_uri() -> ::String
- (::String) — YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.
If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV analyzes the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, because all the fields in the predict instance are formatted as strings.
#analysis_instance_schema_uri=
def analysis_instance_schema_uri=(value) -> ::String
- value (::String) — YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.
If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV analyzes the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, because all the fields in the predict instance are formatted as strings.
- (::String) — YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.
If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV analyzes the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, because all the fields in the predict instance are formatted as strings.
#bigquery_tables
def bigquery_tables() -> ::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringBigQueryTable>
-
(::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringBigQueryTable>) —
Output only. The BigQuery tables created for the job in the customer project. Customers can run their own queries and analysis against them (see the sketch below). There can be at most 4 log tables:
- Training data logging predict request/response
- Serving data logging predict request/response
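Since this field is output only, the sketch below just inspects the tables on a job already returned by the service; the log_source, log_type, and bigquery_table_path accessors on ModelDeploymentMonitoringBigQueryTable are assumed from the wider API surface.

```ruby
# `job` is a ModelDeploymentMonitoringJob previously fetched from the service.
job.bigquery_tables.each do |table|
  # Assumed field names: log source (training/serving), log type
  # (predict request/response), and the fully qualified BigQuery table path.
  puts "#{table.log_source} / #{table.log_type}: #{table.bigquery_table_path}"
end
```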
#create_time
def create_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. Timestamp when this ModelDeploymentMonitoringJob was created.
#display_name
def display_name() -> ::String
- (::String) — Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
#display_name=
def display_name=(value) -> ::String
- value (::String) — Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- (::String) — Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
#enable_monitoring_pipeline_logs
def enable_monitoring_pipeline_logs() -> ::Boolean
- (::Boolean) — If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur costs, which are subject to Cloud Logging pricing.
#enable_monitoring_pipeline_logs=
def enable_monitoring_pipeline_logs=(value) -> ::Boolean
- value (::Boolean) — If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur costs, which are subject to Cloud Logging pricing.
- (::Boolean) — If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur costs, which are subject to Cloud Logging pricing.
#encryption_spec
def encryption_spec() -> ::Google::Cloud::AIPlatform::V1::EncryptionSpec
- (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
#encryption_spec=
def encryption_spec=(value) -> ::Google::Cloud::AIPlatform::V1::EncryptionSpec
- value (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- (::Google::Cloud::AIPlatform::V1::EncryptionSpec) — Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
#endpoint
def endpoint() -> ::String
-
(::String) — Required. Endpoint resource name.
Format:
projects/{project}/locations/{location}/endpoints/{endpoint}
#endpoint=
def endpoint=(value) -> ::String
-
value (::String) — Required. Endpoint resource name.
Format:
projects/{project}/locations/{location}/endpoints/{endpoint}
-
(::String) — Required. Endpoint resource name.
Format:
projects/{project}/locations/{location}/endpoints/{endpoint}
#error
def error() -> ::Google::Rpc::Status
- (::Google::Rpc::Status) — Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
#labels
def labels() -> ::Google::Protobuf::Map{::String => ::String}
-
(::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize your
ModelDeploymentMonitoringJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
#labels=
def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
-
value (::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize your
ModelDeploymentMonitoringJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
-
(::Google::Protobuf::Map{::String => ::String}) — The labels with user-defined metadata to organize your
ModelDeploymentMonitoringJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
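A small sketch of attaching labels; the generated map behaves like a Google::Protobuf::Map of strings, and the keys and values here are placeholders.

```ruby
# Keys and values must respect the 64-character / lowercase constraints above.
job.labels["env"]  = "staging"
job.labels["team"] = "ml-platform"
```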
#latest_monitoring_pipeline_metadata
def latest_monitoring_pipeline_metadata() -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob::LatestMonitoringPipelineMetadata
- (::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob::LatestMonitoringPipelineMetadata) — Output only. Latest triggered monitoring pipeline metadata.
#log_ttl
def log_ttl() -> ::Google::Protobuf::Duration
- (::Google::Protobuf::Duration) — The TTL of the BigQuery tables in user projects that store the logs. A day is the basic unit of the TTL: the value is rounded up to whole days (the ceiling of TTL/86400 seconds). For example, { seconds: 3600 } indicates a TTL of 1 day.
#log_ttl=
def log_ttl=(value) -> ::Google::Protobuf::Duration
- value (::Google::Protobuf::Duration) — The TTL of the BigQuery tables in user projects that store the logs. A day is the basic unit of the TTL: the value is rounded up to whole days (the ceiling of TTL/86400 seconds). For example, { seconds: 3600 } indicates a TTL of 1 day.
- (::Google::Protobuf::Duration) — The TTL of the BigQuery tables in user projects that store the logs. A day is the basic unit of the TTL: the value is rounded up to whole days (the ceiling of TTL/86400 seconds). For example, { seconds: 3600 } indicates a TTL of 1 day.
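A sketch of the day-granularity rounding described above, using Google::Protobuf::Duration; the ten-day retention value is just an illustration.

```ruby
# Rounded up to whole days: 3_600 seconds still yields a 1-day TTL.
job.log_ttl = Google::Protobuf::Duration.new(seconds: 3_600)

# Keep the logging tables for 10 days (10 * 86_400 seconds).
job.log_ttl = Google::Protobuf::Duration.new(seconds: 864_000)
```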
#logging_sampling_strategy
def logging_sampling_strategy() -> ::Google::Cloud::AIPlatform::V1::SamplingStrategy
- (::Google::Cloud::AIPlatform::V1::SamplingStrategy) — Required. Sample Strategy for logging.
#logging_sampling_strategy=
def logging_sampling_strategy=(value) -> ::Google::Cloud::AIPlatform::V1::SamplingStrategy
- value (::Google::Cloud::AIPlatform::V1::SamplingStrategy) — Required. Sample Strategy for logging.
- (::Google::Cloud::AIPlatform::V1::SamplingStrategy) — Required. Sample Strategy for logging.
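A sketch assuming SamplingStrategy exposes a random_sample_config with a sample_rate fraction, which is the typical way to express the strategy.

```ruby
# Log a random 20% of prediction requests for monitoring analysis.
job.logging_sampling_strategy =
  Google::Cloud::AIPlatform::V1::SamplingStrategy.new(
    random_sample_config: { sample_rate: 0.2 }
  )
```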
#model_deployment_monitoring_objective_configs
def model_deployment_monitoring_objective_configs() -> ::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>
- (::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>) — Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
#model_deployment_monitoring_objective_configs=
def model_deployment_monitoring_objective_configs=(value) -> ::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>
- value (::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>) — Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- (::Array<::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig>) — Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
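Because each DeployedModel needs its own entry, a per-model sketch follows; the deployed model IDs and feature names are placeholders, and ThresholdConfig plus the drift/skew sub-configs are assumed from the wider ModelMonitoringObjectiveConfig surface.

```ruby
# Hypothetical threshold shared by both entries.
threshold = Google::Cloud::AIPlatform::V1::ThresholdConfig.new(value: 0.3)

# Drift detection for one deployed model ...
job.model_deployment_monitoring_objective_configs <<
  Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig.new(
    deployed_model_id: "1111111111",
    objective_config: {
      prediction_drift_detection_config: { drift_thresholds: { "age" => threshold } }
    }
  )

# ... and training/serving skew detection for another.
job.model_deployment_monitoring_objective_configs <<
  Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringObjectiveConfig.new(
    deployed_model_id: "2222222222",
    objective_config: {
      training_prediction_skew_detection_config: { skew_thresholds: { "age" => threshold } }
    }
  )
```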
#model_deployment_monitoring_schedule_config
def model_deployment_monitoring_schedule_config() -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig
- (::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig) — Required. Schedule config for running the monitoring job.
#model_deployment_monitoring_schedule_config=
def model_deployment_monitoring_schedule_config=(value) -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig
- value (::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig) — Required. Schedule config for running the monitoring job.
- (::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig) — Required. Schedule config for running the monitoring job.
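A sketch assuming monitor_interval is the Duration field of the schedule config; six hours is an arbitrary example interval.

```ruby
# Run the monitoring pipeline every 6 hours.
job.model_deployment_monitoring_schedule_config =
  Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringScheduleConfig.new(
    monitor_interval: { seconds: 6 * 3600 }
  )
```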
#model_monitoring_alert_config
def model_monitoring_alert_config() -> ::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig
- (::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig) — Alert config for model monitoring.
#model_monitoring_alert_config=
def model_monitoring_alert_config=(value) -> ::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig
- value (::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig) — Alert config for model monitoring.
- (::Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig) — Alert config for model monitoring.
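A sketch assuming the alert config's email_alert_config (with user_emails) and enable_logging fields; the address is a placeholder.

```ruby
# Email the on-call list and also write detected anomalies to Cloud Logging.
job.model_monitoring_alert_config =
  Google::Cloud::AIPlatform::V1::ModelMonitoringAlertConfig.new(
    email_alert_config: { user_emails: ["ml-oncall@example.com"] },
    enable_logging: true
  )
```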
#name
def name() -> ::String
- (::String) — Output only. Resource name of a ModelDeploymentMonitoringJob.
#next_schedule_time
def next_schedule_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
#predict_instance_schema_uri
def predict_instance_schema_uri() -> ::String
- (::String) — YAML schema file URI describing the format of a single instance that is given to this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
#predict_instance_schema_uri=
def predict_instance_schema_uri=(value) -> ::String
- value (::String) — YAML schema file URI describing the format of a single instance that is given to this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
- (::String) — YAML schema file URI describing the format of a single instance that is given to this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
#sample_predict_instance
def sample_predict_instance() -> ::Google::Protobuf::Value
- (::Google::Protobuf::Value) — Sample predict instance, in the same format as PredictRequest.instances. This can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
#sample_predict_instance=
def sample_predict_instance=(value) -> ::Google::Protobuf::Value
- value (::Google::Protobuf::Value) — Sample predict instance, in the same format as PredictRequest.instances. This can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
- (::Google::Protobuf::Value) — Sample predict instance, in the same format as PredictRequest.instances. This can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
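A sketch building the sample instance explicitly from the protobuf well-known types; the feature names and values are placeholders.

```ruby
# One prediction instance in PredictRequest.instances form.
job.sample_predict_instance = Google::Protobuf::Value.new(
  struct_value: Google::Protobuf::Struct.new(
    fields: {
      "age"     => Google::Protobuf::Value.new(number_value: 42.0),
      "country" => Google::Protobuf::Value.new(string_value: "DE")
    }
  )
)
```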
#satisfies_pzi
def satisfies_pzi() -> ::Boolean
- (::Boolean) — Output only. Reserved for future use.
#satisfies_pzs
def satisfies_pzs() -> ::Boolean
- (::Boolean) — Output only. Reserved for future use.
#schedule_state
def schedule_state() -> ::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob::MonitoringScheduleState
- (::Google::Cloud::AIPlatform::V1::ModelDeploymentMonitoringJob::MonitoringScheduleState) — Output only. Schedule state when the monitoring job is in Running state.
#state
def state() -> ::Google::Cloud::AIPlatform::V1::JobState
- (::Google::Cloud::AIPlatform::V1::JobState) — Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state becomes 'RUNNING'. When the job is paused, the state is 'PAUSED'; when it is resumed, the state returns to 'RUNNING'.
#stats_anomalies_base_directory
def stats_anomalies_base_directory() -> ::Google::Cloud::AIPlatform::V1::GcsDestination
- (::Google::Cloud::AIPlatform::V1::GcsDestination) — Stats anomalies base folder path.
#stats_anomalies_base_directory=
def stats_anomalies_base_directory=(value) -> ::Google::Cloud::AIPlatform::V1::GcsDestination
- value (::Google::Cloud::AIPlatform::V1::GcsDestination) — Stats anomalies base folder path.
- (::Google::Cloud::AIPlatform::V1::GcsDestination) — Stats anomalies base folder path.
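A sketch assuming GcsDestination's output_uri_prefix field; the bucket name is a placeholder.

```ruby
# Cloud Storage prefix under which per-run statistics and anomalies are written.
job.stats_anomalies_base_directory =
  Google::Cloud::AIPlatform::V1::GcsDestination.new(
    output_uri_prefix: "gs://my-monitoring-bucket/model-monitoring/"
  )
```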
#update_time
def update_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently.