Class ModelDeploymentMonitoringJob (1.17.1)

ModelDeploymentMonitoringJob(
    model_deployment_monitoring_job_name: str,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
)

Vertex AI Model Deployment Monitoring Job.

This class should be used in conjunction with the Endpoint class in order to configure model monitoring for deployed models.
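For example, a minimal sketch of that flow, assuming an Endpoint with a deployed model already exists and using placeholder project, endpoint, and feature names (the model_monitoring config helpers shown here are documented on their own reference pages):

from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")

# An existing endpoint that already has a model deployed to it.
endpoint = aiplatform.Endpoint("1234567890")

# Create and launch a monitoring job that checks prediction drift on one feature.
job = aiplatform.ModelDeploymentMonitoringJob.create(
    endpoint=endpoint,
    display_name="my-monitoring-job",
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    objective_configs=model_monitoring.ObjectiveConfig(
        drift_detection_config=model_monitoring.DriftDetectionConfig(
            drift_thresholds={"some_feature": 0.05}
        )
    ),
)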

Inheritance

builtins.object > google.cloud.aiplatform.base.VertexAiResourceNoun > builtins.object > google.cloud.aiplatform.base.FutureManager > google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager > builtins.object > abc.ABC > google.cloud.aiplatform.base.DoneMixin > google.cloud.aiplatform.base.StatefulResource > google.cloud.aiplatform.base.VertexAiStatefulResource > google.cloud.aiplatform.jobs._Job > ModelDeploymentMonitoringJob

Properties

end_time

Time when the Job resource entered the JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED state.

Methods

ModelDeploymentMonitoringJob

ModelDeploymentMonitoringJob(
    model_deployment_monitoring_job_name: str,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
)

Initializer for ModelDeploymentMonitoringJob.

Parameter
model_deployment_monitoring_job_name str

Required. A fully-qualified ModelDeploymentMonitoringJob resource name or ID. Example: "projects/.../locations/.../modelDeploymentMonitoringJobs/456" or "456" when project and location are initialized or passed.
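A minimal sketch of the two accepted name forms, using placeholder project and job IDs:

from google.cloud import aiplatform

# A fully-qualified resource name identifies the job on its own.
job = aiplatform.ModelDeploymentMonitoringJob(
    "projects/my-project/locations/us-central1/modelDeploymentMonitoringJobs/456"
)

# A bare ID works once project and location are initialized (or passed explicitly).
aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.ModelDeploymentMonitoringJob("456")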

cancel

cancel()

Cancels this Job.

Success of cancellation is not guaranteed. Use the Job.state property to verify whether the cancellation succeeded.
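A short sketch of cancelling and then checking the state, assuming job is an existing ModelDeploymentMonitoringJob instance and that the JobState enum is the one exported from google.cloud.aiplatform_v1.types:

from google.cloud.aiplatform_v1.types import JobState

job.cancel()
if job.state == JobState.JOB_STATE_CANCELLED:
    print("Cancellation succeeded.")
else:
    print(f"Job is still in state {job.state}; cancellation is not guaranteed.")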

create

create(
    endpoint: Union[str, google.cloud.aiplatform.models.Endpoint],
    objective_configs: Optional[
        Union[
            google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig,
            Dict[
                str, google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig
            ],
        ]
    ] = None,
    logging_sampling_strategy: Optional[
        google.cloud.aiplatform.model_monitoring.sampling.RandomSampleConfig
    ] = None,
    schedule_config: Optional[
        google.cloud.aiplatform.model_monitoring.schedule.ScheduleConfig
    ] = None,
    display_name: Optional[str] = None,
    deployed_model_ids: Optional[List[str]] = None,
    alert_config: Optional[
        google.cloud.aiplatform.model_monitoring.alert.EmailAlertConfig
    ] = None,
    predict_instance_schema_uri: Optional[str] = None,
    sample_predict_instance: Optional[str] = None,
    analysis_instance_schema_uri: Optional[str] = None,
    bigquery_tables_log_ttl: Optional[int] = None,
    stats_anomalies_base_directory: Optional[str] = None,
    enable_monitoring_pipeline_logs: Optional[bool] = None,
    labels: Optional[Dict[str, str]] = None,
    encryption_spec_key_name: Optional[str] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    create_request_timeout: Optional[float] = None,
)

Creates and launches a model monitoring job.

Parameters
endpoint Union[str, "aiplatform.Endpoint"]

Required. Endpoint resource name or an instance of aiplatform.Endpoint. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

logging_sampling_strategy model_monitoring.sampling.RandomSampleConfig

Optional. Sampling strategy for logging.

schedule_config model_monitoring.schedule.ScheduleConfig

Optional. Configures model monitoring job scheduling interval in hours. This defines how often the monitoring jobs are triggered.

display_name str

Optional. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

deployed_model_ids List[str]

Optional. Use this argument to specify which deployed models to apply the objective config to. If left unspecified, the same config will be applied to all deployed models.

alert_config model_monitoring.alert.EmailAlertConfig

Optional. Configures how alerts are sent to the user. Currently, only email alerts are supported.

predict_instance_schema_uri str

Optional. URI of a YAML schema file describing the format of a single instance, which is used to format the Endpoint's prediction (and explanation) requests. If not set, the schema will be generated from collected predict requests.

sample_predict_instance str

Optional. A sample predict instance in the same format as PredictionRequest.instances; this can be set as a replacement for predict_instance_schema_uri. If not set, the schema will be generated from collected predict requests.

analysis_instance_schema_uri str

Optional. URI of a YAML schema file describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV analyzes the data in the exact format of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set with all fields of the predict instance formatted as strings.

bigquery_tables_log_ttl int

Optional. The TTL (time to live) of the BigQuery tables in user projects that store the logs. A day is the basic unit of the TTL, and the value is rounded up to whole days (the ceiling of TTL/86400); for example, { seconds: 3600 } indicates a TTL of one day.

stats_anomalies_base_directory str

Optional. Stats anomalies base folder path.

enable_monitoring_pipeline_logs bool

Optional. If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur cost, which is subject to Cloud Logging pricing (https://cloud.google.com/logging#pricing).

labels Dict[str, str]

Optional. The labels with user-defined metadata to organize the ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec_key_name str

Optional. Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.

create_request_timeout float

Optional. Timeout in seconds for the model monitoring job creation request.
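As a fuller sketch of create(), the example below applies a drift objective per deployed model and also configures sampling, scheduling, and email alerts. The endpoint ID, deployed model IDs, feature names, and email address are placeholders; check the model_monitoring reference for the exact constructor arguments of each config helper:

from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")

drift_objective = model_monitoring.ObjectiveConfig(
    drift_detection_config=model_monitoring.DriftDetectionConfig(
        drift_thresholds={"age": 0.05, "income": 0.05}
    )
)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    endpoint=endpoint,
    display_name="per-model-monitoring",
    # A dict keyed by deployed model ID applies a config per model; a single
    # ObjectiveConfig would instead apply to all deployed models on the endpoint.
    objective_configs={
        "1111111111": drift_objective,
        "2222222222": drift_objective,
    },
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.5),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=6),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["alerts@example.com"]),
    enable_monitoring_pipeline_logs=True,
)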

delete

delete()

Deletes this MDM (model deployment monitoring) job.

pause

pause()

Pauses a running MDM job.

resume

resume()

Resumes a paused MDM job.
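A short sketch of pausing and later resuming scheduled monitoring runs, assuming job is a running ModelDeploymentMonitoringJob instance:

job.pause()   # scheduled monitoring runs stop until the job is resumed
# ... do maintenance, redeploy models, etc. ...
job.resume()  # scheduled runs start again on the configured interval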

update

update(
    *,
    display_name: Optional[str] = None,
    schedule_config: Optional[
        google.cloud.aiplatform.model_monitoring.schedule.ScheduleConfig
    ] = None,
    alert_config: Optional[
        google.cloud.aiplatform.model_monitoring.alert.EmailAlertConfig
    ] = None,
    logging_sampling_strategy: Optional[
        google.cloud.aiplatform.model_monitoring.sampling.RandomSampleConfig
    ] = None,
    labels: Optional[Dict[str, str]] = None,
    bigquery_tables_log_ttl: Optional[int] = None,
    enable_monitoring_pipeline_logs: Optional[bool] = None,
    objective_configs: Optional[
        Union[
            google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig,
            Dict[
                str, google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig
            ],
        ]
    ] = None,
    deployed_model_ids: Optional[List[str]] = None
)

Updates an existing ModelDeploymentMonitoringJob.

Parameters
display_name str

Optional. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

schedule_config model_monitoring.schedule.ScheduleConfig

Optional. Configures the model monitoring job scheduling interval in hours. This defines how often the monitoring jobs are triggered.

alert_config model_monitoring.alert.EmailAlertConfig

Optional. Configures how alerts are sent to the user. Currently, only email alerts are supported.

logging_sampling_strategy model_monitoring.sampling.RandomSampleConfig

Optional. Sampling strategy for logging.

labels Dict[str, str]

Optional. The labels with user-defined metadata to organize the ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

bigquery_tables_log_ttl int

Optional. The TTL (time to live) of the BigQuery tables in user projects that store the logs. A day is the basic unit of the TTL, and the value is rounded up to whole days (the ceiling of TTL/86400); for example, { seconds: 3600 } indicates a TTL of one day.

enable_monitoring_pipeline_logs bool

Optional. If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur cost, which is subject to Cloud Logging pricing (https://cloud.google.com/logging#pricing).

deployed_model_ids List[str]

Optional. Use this argument to specify which deployed models to apply the updated objective config to. If left unspecified, the same config will be applied to all deployed models.
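A sketch of an update() call that changes the schedule, alerting, and sampling of an existing job; the project, job ID, and email address are placeholders:

from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.ModelDeploymentMonitoringJob("456")

job.update(
    display_name="monitoring-job-v2",
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=12),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["alerts@example.com"]),
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.2),
)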