JobServiceAsyncClient(*, credentials: Optional[google.auth.credentials.Credentials] = None, transport: Union[str, google.cloud.aiplatform_v1beta1.services.job_service.transports.base.JobServiceTransport] = 'grpc_asyncio', client_options: Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)
A service for creating and managing Vertex AI's jobs.
Inheritance
builtins.object > JobServiceAsyncClient
Properties
transport
Returns the transport used by the client instance.
Type | Description |
JobServiceTransport | The transport used by the client instance. |
Methods
JobServiceAsyncClient
JobServiceAsyncClient(*, credentials: Optional[google.auth.credentials.Credentials] = None, transport: Union[str, google.cloud.aiplatform_v1beta1.services.job_service.transports.base.JobServiceTransport] = 'grpc_asyncio', client_options: Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)
Instantiates the job service client.
Name | Description |
credentials |
Optional[google.auth.credentials.Credentials]
The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. |
transport |
Union[str, `.JobServiceTransport`]
The transport to use. If set to None, a transport is chosen automatically. |
client_options |
ClientOptions
Custom options for the client. It won't take effect if a |
Type | Description |
google.auth.exceptions.MutualTLSChannelError | If mutual TLS transport creation failed for any reason. |
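As a quick orientation, here is a minimal sketch of constructing the async client with custom client_options. The regional endpoint and project values are placeholders, and Application Default Credentials are assumed.
import asyncio
from google.api_core.client_options import ClientOptions
from google.cloud import aiplatform_v1beta1

async def main():
    # Passing client_options is optional; omit it to use the default endpoint.
    client = aiplatform_v1beta1.JobServiceAsyncClient(
        client_options=ClientOptions(api_endpoint="us-central1-aiplatform.googleapis.com"),
    )
    # The classmethod path helpers (documented below) work on the class or an instance.
    print(client.common_location_path("my-project", "us-central1"))

asyncio.run(main())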
batch_prediction_job_path
batch_prediction_job_path(project: str, location: str, batch_prediction_job: str)
Returns a fully-qualified batch_prediction_job string.
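A short usage sketch for this path helper (values are placeholders); parse_batch_prediction_job_path, documented further below, inverts the string back into its segments.
from google.cloud import aiplatform_v1beta1

path = aiplatform_v1beta1.JobServiceAsyncClient.batch_prediction_job_path(
    project="my-project",
    location="us-central1",
    batch_prediction_job="1234567890",
)
print(path)
# Typically "projects/my-project/locations/us-central1/batchPredictionJobs/1234567890"

# Recover the individual segments from a full resource name.
print(aiplatform_v1beta1.JobServiceAsyncClient.parse_batch_prediction_job_path(path))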
cancel_batch_prediction_job
cancel_batch_prediction_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CancelBatchPredictionJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Cancels a BatchPredictionJob.
Starts asynchronous cancellation on the BatchPredictionJob. The server makes the best effort to cancel the job, but success is not guaranteed. Clients can use JobService.GetBatchPredictionJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On a successful cancellation, the BatchPredictionJob is not deleted; instead its BatchPredictionJob.state is set to CANCELLED. Any files already outputted by the job are not deleted.
from google.cloud import aiplatform_v1beta1

def sample_cancel_batch_prediction_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.CancelBatchPredictionJobRequest(
        name="name_value",
    )

    # Make the request
    client.cancel_batch_prediction_job(request=request)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CancelBatchPredictionJobRequest, dict]
The request object. Request message for JobService.CancelBatchPredictionJob. |
name |
`str`
Required. The name of the BatchPredictionJob to cancel. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
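The sample above uses the synchronous JobServiceClient. With the async client documented on this page the call is awaited; a minimal sketch, assuming default credentials and a placeholder job name:
import asyncio
from google.cloud import aiplatform_v1beta1

async def cancel_batch_prediction_job_async():
    client = aiplatform_v1beta1.JobServiceAsyncClient()

    request = aiplatform_v1beta1.CancelBatchPredictionJobRequest(
        # Placeholder; use the full resource name of an existing job.
        name="projects/my-project/locations/us-central1/batchPredictionJobs/123",
    )

    # The cancel RPC returns no payload; awaiting it issues the request.
    await client.cancel_batch_prediction_job(request=request)

asyncio.run(cancel_batch_prediction_job_async())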
cancel_custom_job
cancel_custom_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CancelCustomJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Cancels a CustomJob. Starts asynchronous cancellation on the CustomJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use JobService.GetCustomJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On successful cancellation, the CustomJob is not deleted; instead it becomes a job with a CustomJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and CustomJob.state is set to CANCELLED.
from google.cloud import aiplatform_v1beta1

def sample_cancel_custom_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.CancelCustomJobRequest(
        name="name_value",
    )

    # Make the request
    client.cancel_custom_job(request=request)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CancelCustomJobRequest, dict]
The request object. Request message for JobService.CancelCustomJob. |
name |
`str`
Required. The name of the CustomJob to cancel. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
cancel_data_labeling_job
cancel_data_labeling_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CancelDataLabelingJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Cancels a DataLabelingJob. Success of cancellation is not guaranteed.
from google.cloud import aiplatform_v1beta1

def sample_cancel_data_labeling_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.CancelDataLabelingJobRequest(
        name="name_value",
    )

    # Make the request
    client.cancel_data_labeling_job(request=request)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CancelDataLabelingJobRequest, dict]
The request object. Request message for JobService.CancelDataLabelingJob. |
name |
`str`
Required. The name of the DataLabelingJob. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
cancel_hyperparameter_tuning_job
cancel_hyperparameter_tuning_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CancelHyperparameterTuningJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Cancels a HyperparameterTuningJob. Starts asynchronous cancellation on the HyperparameterTuningJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use JobService.GetHyperparameterTuningJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On successful cancellation, the HyperparameterTuningJob is not deleted; instead it becomes a job with a HyperparameterTuningJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and HyperparameterTuningJob.state is set to CANCELLED.
from google.cloud import aiplatform_v1beta1

def sample_cancel_hyperparameter_tuning_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.CancelHyperparameterTuningJobRequest(
        name="name_value",
    )

    # Make the request
    client.cancel_hyperparameter_tuning_job(request=request)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CancelHyperparameterTuningJobRequest, dict]
The request object. Request message for JobService.CancelHyperparameterTuningJob. |
name |
`str`
Required. The name of the HyperparameterTuningJob to cancel. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
common_billing_account_path
common_billing_account_path(billing_account: str)
Returns a fully-qualified billing_account string.
common_folder_path
common_folder_path(folder: str)
Returns a fully-qualified folder string.
common_location_path
common_location_path(project: str, location: str)
Returns a fully-qualified location string.
common_organization_path
common_organization_path(organization: str)
Returns a fully-qualified organization string.
common_project_path
common_project_path(project: str)
Returns a fully-qualified project string.
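These common_* helpers are plain classmethods and can be exercised without any network calls; a small sketch with placeholder values, using parse_common_location_path (documented below) to invert the result:
from google.cloud import aiplatform_v1beta1

location = aiplatform_v1beta1.JobServiceAsyncClient.common_location_path(
    "my-project", "us-central1"
)
print(location)
# Typically "projects/my-project/locations/us-central1"

print(aiplatform_v1beta1.JobServiceAsyncClient.parse_common_location_path(location))
# Typically {'project': 'my-project', 'location': 'us-central1'}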
create_batch_prediction_job
create_batch_prediction_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CreateBatchPredictionJobRequest, dict]] = None, *, parent: Optional[str] = None, batch_prediction_job: Optional[google.cloud.aiplatform_v1beta1.types.batch_prediction_job.BatchPredictionJob] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Creates a BatchPredictionJob. Once created, the BatchPredictionJob is immediately attempted to start.
from google.cloud import aiplatform_v1beta1

def sample_create_batch_prediction_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    batch_prediction_job = aiplatform_v1beta1.BatchPredictionJob()
    batch_prediction_job.display_name = "display_name_value"
    batch_prediction_job.input_config.gcs_source.uris = ['uris_value_1', 'uris_value_2']
    batch_prediction_job.input_config.instances_format = "instances_format_value"
    batch_prediction_job.output_config.gcs_destination.output_uri_prefix = "output_uri_prefix_value"
    batch_prediction_job.output_config.predictions_format = "predictions_format_value"

    request = aiplatform_v1beta1.CreateBatchPredictionJobRequest(
        parent="parent_value",
        batch_prediction_job=batch_prediction_job,
    )

    # Make the request
    response = client.create_batch_prediction_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CreateBatchPredictionJobRequest, dict]
The request object. Request message for JobService.CreateBatchPredictionJob. |
parent |
`str`
Required. The resource name of the Location to create the BatchPredictionJob in. Format: |
batch_prediction_job |
BatchPredictionJob
Required. The BatchPredictionJob to create. This corresponds to the |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.BatchPredictionJob | A job that uses a Model to produce predictions on multiple [input instances][google.cloud.aiplatform.v1beta1.BatchPredictionJob.input_config]. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances. |
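With the async client, the same call is awaited, and the flattened parent and batch_prediction_job arguments can be passed directly instead of a request object. A hedged sketch; every resource name, URI, and format value below is a placeholder:
import asyncio
from google.cloud import aiplatform_v1beta1

async def create_batch_prediction_job_async():
    client = aiplatform_v1beta1.JobServiceAsyncClient()

    batch_prediction_job = aiplatform_v1beta1.BatchPredictionJob()
    batch_prediction_job.display_name = "my-batch-job"
    batch_prediction_job.model = "projects/my-project/locations/us-central1/models/456"
    batch_prediction_job.input_config.instances_format = "jsonl"
    batch_prediction_job.input_config.gcs_source.uris = ["gs://my-bucket/input.jsonl"]
    batch_prediction_job.output_config.predictions_format = "jsonl"
    batch_prediction_job.output_config.gcs_destination.output_uri_prefix = "gs://my-bucket/output/"

    # Flattened arguments correspond to the parent and batch_prediction_job
    # fields of CreateBatchPredictionJobRequest.
    response = await client.create_batch_prediction_job(
        parent="projects/my-project/locations/us-central1",
        batch_prediction_job=batch_prediction_job,
    )
    print(response.name, response.state)

asyncio.run(create_batch_prediction_job_async())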
create_custom_job
create_custom_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CreateCustomJobRequest, dict]] = None, *, parent: Optional[str] = None, custom_job: Optional[google.cloud.aiplatform_v1beta1.types.custom_job.CustomJob] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Creates a CustomJob. Once created, the CustomJob is immediately attempted to run.
from google.cloud import aiplatform_v1beta1

def sample_create_custom_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    custom_job = aiplatform_v1beta1.CustomJob()
    custom_job.display_name = "display_name_value"
    custom_job.job_spec.worker_pool_specs.container_spec.image_uri = "image_uri_value"

    request = aiplatform_v1beta1.CreateCustomJobRequest(
        parent="parent_value",
        custom_job=custom_job,
    )

    # Make the request
    response = client.create_custom_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CreateCustomJobRequest, dict]
The request object. Request message for JobService.CreateCustomJob. |
parent |
`str`
Required. The resource name of the Location to create the CustomJob in. Format: |
custom_job |
CustomJob
Required. The CustomJob to create. This corresponds to the |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.CustomJob | Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters terminal state (failed or succeeded). |
create_data_labeling_job
create_data_labeling_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CreateDataLabelingJobRequest, dict]] = None, *, parent: Optional[str] = None, data_labeling_job: Optional[google.cloud.aiplatform_v1beta1.types.data_labeling_job.DataLabelingJob] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Creates a DataLabelingJob.
from google.cloud import aiplatform_v1beta1

def sample_create_data_labeling_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    data_labeling_job = aiplatform_v1beta1.DataLabelingJob()
    data_labeling_job.display_name = "display_name_value"
    data_labeling_job.datasets = ['datasets_value_1', 'datasets_value_2']
    data_labeling_job.labeler_count = 1375
    data_labeling_job.instruction_uri = "instruction_uri_value"
    data_labeling_job.inputs_schema_uri = "inputs_schema_uri_value"
    data_labeling_job.inputs.null_value = "NULL_VALUE"

    request = aiplatform_v1beta1.CreateDataLabelingJobRequest(
        parent="parent_value",
        data_labeling_job=data_labeling_job,
    )

    # Make the request
    response = client.create_data_labeling_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CreateDataLabelingJobRequest, dict]
The request object. Request message for JobService.CreateDataLabelingJob. |
parent |
`str`
Required. The parent of the DataLabelingJob. Format: |
data_labeling_job |
DataLabelingJob
Required. The DataLabelingJob to create. This corresponds to the |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.DataLabelingJob | DataLabelingJob is used to trigger a human labeling job on unlabeled data from the following Dataset: |
create_hyperparameter_tuning_job
create_hyperparameter_tuning_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CreateHyperparameterTuningJobRequest, dict]] = None, *, parent: Optional[str] = None, hyperparameter_tuning_job: Optional[google.cloud.aiplatform_v1beta1.types.hyperparameter_tuning_job.HyperparameterTuningJob] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Creates a HyperparameterTuningJob
from google.cloud import aiplatform_v1beta1

def sample_create_hyperparameter_tuning_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    hyperparameter_tuning_job = aiplatform_v1beta1.HyperparameterTuningJob()
    hyperparameter_tuning_job.display_name = "display_name_value"
    hyperparameter_tuning_job.study_spec.metrics.metric_id = "metric_id_value"
    hyperparameter_tuning_job.study_spec.metrics.goal = "MINIMIZE"
    hyperparameter_tuning_job.study_spec.parameters.double_value_spec.min_value = 0.96
    hyperparameter_tuning_job.study_spec.parameters.double_value_spec.max_value = 0.962
    hyperparameter_tuning_job.study_spec.parameters.parameter_id = "parameter_id_value"
    hyperparameter_tuning_job.max_trial_count = 1609
    hyperparameter_tuning_job.parallel_trial_count = 2128
    hyperparameter_tuning_job.trial_job_spec.worker_pool_specs.container_spec.image_uri = "image_uri_value"

    request = aiplatform_v1beta1.CreateHyperparameterTuningJobRequest(
        parent="parent_value",
        hyperparameter_tuning_job=hyperparameter_tuning_job,
    )

    # Make the request
    response = client.create_hyperparameter_tuning_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CreateHyperparameterTuningJobRequest, dict]
The request object. Request message for JobService.CreateHyperparameterTuningJob. |
parent |
`str`
Required. The resource name of the Location to create the HyperparameterTuningJob in. Format: |
hyperparameter_tuning_job |
HyperparameterTuningJob
Required. The HyperparameterTuningJob to create. This corresponds to the |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.HyperparameterTuningJob | Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification. |
create_model_deployment_monitoring_job
create_model_deployment_monitoring_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.CreateModelDeploymentMonitoringJobRequest, dict]] = None, *, parent: Optional[str] = None, model_deployment_monitoring_job: Optional[google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job.ModelDeploymentMonitoringJob] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Creates a ModelDeploymentMonitoringJob. It will run periodically on a configured interval.
from google.cloud import aiplatform_v1beta1

def sample_create_model_deployment_monitoring_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    model_deployment_monitoring_job = aiplatform_v1beta1.ModelDeploymentMonitoringJob()
    model_deployment_monitoring_job.display_name = "display_name_value"
    model_deployment_monitoring_job.endpoint = "endpoint_value"

    request = aiplatform_v1beta1.CreateModelDeploymentMonitoringJobRequest(
        parent="parent_value",
        model_deployment_monitoring_job=model_deployment_monitoring_job,
    )

    # Make the request
    response = client.create_model_deployment_monitoring_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.CreateModelDeploymentMonitoringJobRequest, dict]
The request object. Request message for JobService.CreateModelDeploymentMonitoringJob. |
parent |
`str`
Required. The parent of the ModelDeploymentMonitoringJob. Format: |
model_deployment_monitoring_job |
ModelDeploymentMonitoringJob
Required. The ModelDeploymentMonitoringJob to create This corresponds to the |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.ModelDeploymentMonitoringJob | Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors. |
custom_job_path
custom_job_path(project: str, location: str, custom_job: str)
Returns a fully-qualified custom_job string.
data_labeling_job_path
data_labeling_job_path(project: str, location: str, data_labeling_job: str)
Returns a fully-qualified data_labeling_job string.
dataset_path
dataset_path(project: str, location: str, dataset: str)
Returns a fully-qualified dataset string.
delete_batch_prediction_job
delete_batch_prediction_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.DeleteBatchPredictionJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Deletes a BatchPredictionJob. Can only be called on jobs that have already finished.
from google.cloud import aiplatform_v1beta1

def sample_delete_batch_prediction_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.DeleteBatchPredictionJobRequest(
        name="name_value",
    )

    # Make the request
    operation = client.delete_batch_prediction_job(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.DeleteBatchPredictionJobRequest, dict]
The request object. Request message for JobService.DeleteBatchPredictionJob. |
name |
`str`
Required. The name of the BatchPredictionJob resource to be deleted. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be `google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}. |
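Because this method returns a long-running operation, the async client yields a google.api_core.operation_async.AsyncOperation whose result() coroutine must itself be awaited. A minimal sketch with a placeholder resource name:
import asyncio
from google.cloud import aiplatform_v1beta1

async def delete_batch_prediction_job_async():
    client = aiplatform_v1beta1.JobServiceAsyncClient()

    operation = await client.delete_batch_prediction_job(
        name="projects/my-project/locations/us-central1/batchPredictionJobs/123",
    )

    # Wait (asynchronously) for the deletion to finish; the result is Empty.
    await operation.result()
    print("BatchPredictionJob deleted")

asyncio.run(delete_batch_prediction_job_async())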
delete_custom_job
delete_custom_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.DeleteCustomJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Deletes a CustomJob.
from google.cloud import aiplatform_v1beta1

def sample_delete_custom_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.DeleteCustomJobRequest(
        name="name_value",
    )

    # Make the request
    operation = client.delete_custom_job(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.DeleteCustomJobRequest, dict]
The request object. Request message for JobService.DeleteCustomJob. |
name |
`str`
Required. The name of the CustomJob resource to be deleted. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be `google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}. |
delete_data_labeling_job
delete_data_labeling_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.DeleteDataLabelingJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Deletes a DataLabelingJob.
from google.cloud import aiplatform_v1beta1

def sample_delete_data_labeling_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.DeleteDataLabelingJobRequest(
        name="name_value",
    )

    # Make the request
    operation = client.delete_data_labeling_job(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.DeleteDataLabelingJobRequest, dict]
The request object. Request message for JobService.DeleteDataLabelingJob. |
name |
`str`
Required. The name of the DataLabelingJob to be deleted. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be `google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}. |
delete_hyperparameter_tuning_job
delete_hyperparameter_tuning_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.DeleteHyperparameterTuningJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Deletes a HyperparameterTuningJob.
from google.cloud import aiplatform_v1beta1

def sample_delete_hyperparameter_tuning_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.DeleteHyperparameterTuningJobRequest(
        name="name_value",
    )

    # Make the request
    operation = client.delete_hyperparameter_tuning_job(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.DeleteHyperparameterTuningJobRequest, dict]
The request object. Request message for JobService.DeleteHyperparameterTuningJob. |
name |
`str`
Required. The name of the HyperparameterTuningJob resource to be deleted. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be `google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}. |
delete_model_deployment_monitoring_job
delete_model_deployment_monitoring_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.DeleteModelDeploymentMonitoringJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Deletes a ModelDeploymentMonitoringJob.
from google.cloud import aiplatform_v1beta1

def sample_delete_model_deployment_monitoring_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.DeleteModelDeploymentMonitoringJobRequest(
        name="name_value",
    )

    # Make the request
    operation = client.delete_model_deployment_monitoring_job(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.DeleteModelDeploymentMonitoringJobRequest, dict]
The request object. Request message for JobService.DeleteModelDeploymentMonitoringJob. |
name |
`str`
Required. The resource name of the model monitoring job to delete. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be `google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}. |
endpoint_path
endpoint_path(project: str, location: str, endpoint: str)
Returns a fully-qualified endpoint string.
from_service_account_file
from_service_account_file(filename: str, *args, **kwargs)
Creates an instance of this client using the provided credentials file.
Name | Description |
filename |
str
The path to the service account private key json file. |
Type | Description |
JobServiceAsyncClient | The constructed client. |
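A brief sketch of constructing the async client from a service account key file and making one call with it; the key path and job name are placeholders:
import asyncio
from google.cloud import aiplatform_v1beta1

async def main():
    client = aiplatform_v1beta1.JobServiceAsyncClient.from_service_account_file(
        "/path/to/service-account.json"
    )
    job = await client.get_custom_job(
        name="projects/my-project/locations/us-central1/customJobs/123"
    )
    print(job.display_name)

asyncio.run(main())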
from_service_account_info
from_service_account_info(info: dict, *args, **kwargs)
Creates an instance of this client using the provided credentials info.
Name | Description |
info |
dict
The service account private key info. |
Type | Description |
JobServiceAsyncClient | The constructed client. |
from_service_account_json
from_service_account_json(filename: str, *args, **kwargs)
Creates an instance of this client using the provided credentials file.
Name | Description |
filename |
str
The path to the service account private key json file. |
Type | Description |
JobServiceAsyncClient | The constructed client. |
get_batch_prediction_job
get_batch_prediction_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.GetBatchPredictionJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Gets a BatchPredictionJob
from google.cloud import aiplatform_v1beta1

def sample_get_batch_prediction_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.GetBatchPredictionJobRequest(
        name="name_value",
    )

    # Make the request
    response = client.get_batch_prediction_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.GetBatchPredictionJobRequest, dict]
The request object. Request message for JobService.GetBatchPredictionJob. |
name |
`str`
Required. The name of the BatchPredictionJob resource. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.BatchPredictionJob | A job that uses a Model to produce predictions on multiple [input instances][google.cloud.aiplatform.v1beta1.BatchPredictionJob.input_config]. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances. |
get_custom_job
get_custom_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.GetCustomJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Gets a CustomJob.
from google.cloud import aiplatform_v1beta1

def sample_get_custom_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.GetCustomJobRequest(
        name="name_value",
    )

    # Make the request
    response = client.get_custom_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.GetCustomJobRequest, dict]
The request object. Request message for JobService.GetCustomJob. |
name |
`str`
Required. The name of the CustomJob resource. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.CustomJob | Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters terminal state (failed or succeeded). |
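The retry, timeout, and metadata parameters in the table above are accepted per call. A hedged sketch passing a per-call timeout and handling a missing job with the async client; the resource name is a placeholder, and retry behaviour is left at the method's defaults:
import asyncio
from google.api_core import exceptions
from google.cloud import aiplatform_v1beta1

async def get_custom_job_safely():
    client = aiplatform_v1beta1.JobServiceAsyncClient()

    try:
        job = await client.get_custom_job(
            name="projects/my-project/locations/us-central1/customJobs/123",
            timeout=30.0,  # give up if the call takes longer than 30 seconds
        )
        print(job.display_name, job.state)
    except exceptions.NotFound:
        print("CustomJob does not exist")

asyncio.run(get_custom_job_safely())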
get_data_labeling_job
get_data_labeling_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.GetDataLabelingJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Gets a DataLabelingJob.
from google.cloud import aiplatform_v1beta1

def sample_get_data_labeling_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.GetDataLabelingJobRequest(
        name="name_value",
    )

    # Make the request
    response = client.get_data_labeling_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.GetDataLabelingJobRequest, dict]
The request object. Request message for JobService.GetDataLabelingJob. |
name |
`str`
Required. The name of the DataLabelingJob. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.DataLabelingJob | DataLabelingJob is used to trigger a human labeling job on unlabeled data from the following Dataset: |
get_hyperparameter_tuning_job
get_hyperparameter_tuning_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.GetHyperparameterTuningJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Gets a HyperparameterTuningJob
from google.cloud import aiplatform_v1beta1

def sample_get_hyperparameter_tuning_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.GetHyperparameterTuningJobRequest(
        name="name_value",
    )

    # Make the request
    response = client.get_hyperparameter_tuning_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.GetHyperparameterTuningJobRequest, dict]
The request object. Request message for JobService.GetHyperparameterTuningJob. |
name |
`str`
Required. The name of the HyperparameterTuningJob resource. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.HyperparameterTuningJob | Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification. |
get_model_deployment_monitoring_job
get_model_deployment_monitoring_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.GetModelDeploymentMonitoringJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Gets a ModelDeploymentMonitoringJob.
from google.cloud import aiplatform_v1beta1

def sample_get_model_deployment_monitoring_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.GetModelDeploymentMonitoringJobRequest(
        name="name_value",
    )

    # Make the request
    response = client.get_model_deployment_monitoring_job(request=request)

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.GetModelDeploymentMonitoringJobRequest, dict]
The request object. Request message for JobService.GetModelDeploymentMonitoringJob. |
name |
`str`
Required. The resource name of the ModelDeploymentMonitoringJob. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.types.ModelDeploymentMonitoringJob | Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors. |
get_mtls_endpoint_and_cert_source
get_mtls_endpoint_and_cert_source(
client_options: Optional[google.api_core.client_options.ClientOptions] = None,
)
Return the API endpoint and client cert source for mutual TLS.
The client cert source is determined in the following order:
(1) if the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable is not "true", the client cert source is None.
(2) if client_options.client_cert_source is provided, use the provided one; if the default client cert source exists, use the default one; otherwise the client cert source is None.
The API endpoint is determined in the following order:
(1) if client_options.api_endpoint is provided, use the provided one.
(2) if the GOOGLE_API_USE_MTLS_ENDPOINT environment variable is "always", use the default mTLS endpoint; if the environment variable is "never", use the default API endpoint; otherwise, if a client cert source exists, use the default mTLS endpoint; otherwise use the default API endpoint.
More details can be found at https://google.aip.dev/auth/4114.
Name | Description |
client_options |
google.api_core.client_options.ClientOptions
Custom options for the client. Only the |
Type | Description |
google.auth.exceptions.MutualTLSChannelError | If any errors happen. |
Type | Description |
Tuple[str, Callable[[], Tuple[bytes, bytes]]] | returns the API endpoint and the client cert source to use. |
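A small sketch of calling this classmethod directly; the endpoint value is a placeholder, and the result depends on the GOOGLE_API_USE_CLIENT_CERTIFICATE / GOOGLE_API_USE_MTLS_ENDPOINT environment variables described above:
from google.api_core.client_options import ClientOptions
from google.cloud import aiplatform_v1beta1

options = ClientOptions(api_endpoint="us-central1-aiplatform.googleapis.com")
endpoint, cert_source = aiplatform_v1beta1.JobServiceAsyncClient.get_mtls_endpoint_and_cert_source(
    client_options=options
)
print(endpoint)      # the endpoint the client would use
print(cert_source)   # None unless a client certificate is configured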
get_transport_class
get_transport_class()
Returns an appropriate transport class.
hyperparameter_tuning_job_path
hyperparameter_tuning_job_path(
project: str, location: str, hyperparameter_tuning_job: str
)
Returns a fully-qualified hyperparameter_tuning_job string.
list_batch_prediction_jobs
list_batch_prediction_jobs(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.ListBatchPredictionJobsRequest, dict]] = None, *, parent: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Lists BatchPredictionJobs in a Location.
from google.cloud import aiplatform_v1beta1

def sample_list_batch_prediction_jobs():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.ListBatchPredictionJobsRequest(
        parent="parent_value",
    )

    # Make the request
    page_result = client.list_batch_prediction_jobs(request=request)

    # Handle the response
    for response in page_result:
        print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.ListBatchPredictionJobsRequest, dict]
The request object. Request message for JobService.ListBatchPredictionJobs. |
parent |
`str`
Required. The resource name of the Location to list the BatchPredictionJobs from. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.services.job_service.pagers.ListBatchPredictionJobsAsyncPager | Response message for JobService.ListBatchPredictionJobs Iterating over this object will yield results and resolve additional pages automatically. |
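With the async client, the returned pager is consumed with async for, which resolves additional pages transparently. A minimal sketch; the parent value is a placeholder:
import asyncio
from google.cloud import aiplatform_v1beta1

async def list_batch_prediction_jobs_async():
    client = aiplatform_v1beta1.JobServiceAsyncClient()

    pager = await client.list_batch_prediction_jobs(
        parent="projects/my-project/locations/us-central1",
    )

    # Iterating the AsyncPager fetches further pages as needed.
    async for job in pager:
        print(job.name, job.state)

asyncio.run(list_batch_prediction_jobs_async())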
list_custom_jobs
list_custom_jobs(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.ListCustomJobsRequest, dict]] = None, *, parent: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Lists CustomJobs in a Location.
from google.cloud import aiplatform_v1beta1

def sample_list_custom_jobs():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.ListCustomJobsRequest(
        parent="parent_value",
    )

    # Make the request
    page_result = client.list_custom_jobs(request=request)

    # Handle the response
    for response in page_result:
        print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.ListCustomJobsRequest, dict]
The request object. Request message for JobService.ListCustomJobs. |
parent |
`str`
Required. The resource name of the Location to list the CustomJobs from. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.services.job_service.pagers.ListCustomJobsAsyncPager | Response message for JobService.ListCustomJobs Iterating over this object will yield results and resolve additional pages automatically. |
list_data_labeling_jobs
list_data_labeling_jobs(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.ListDataLabelingJobsRequest, dict]] = None, *, parent: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Lists DataLabelingJobs in a Location.
from google.cloud import aiplatform_v1beta1

def sample_list_data_labeling_jobs():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.ListDataLabelingJobsRequest(
        parent="parent_value",
    )

    # Make the request
    page_result = client.list_data_labeling_jobs(request=request)

    # Handle the response
    for response in page_result:
        print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.ListDataLabelingJobsRequest, dict]
The request object. Request message for JobService.ListDataLabelingJobs. |
parent |
`str`
Required. The parent of the DataLabelingJob. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.services.job_service.pagers.ListDataLabelingJobsAsyncPager | Response message for JobService.ListDataLabelingJobs. Iterating over this object will yield results and resolve additional pages automatically. |
list_hyperparameter_tuning_jobs
list_hyperparameter_tuning_jobs(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.ListHyperparameterTuningJobsRequest, dict]] = None, *, parent: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Lists HyperparameterTuningJobs in a Location.
from google.cloud import aiplatform_v1beta1

def sample_list_hyperparameter_tuning_jobs():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.ListHyperparameterTuningJobsRequest(
        parent="parent_value",
    )

    # Make the request
    page_result = client.list_hyperparameter_tuning_jobs(request=request)

    # Handle the response
    for response in page_result:
        print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.ListHyperparameterTuningJobsRequest, dict]
The request object. Request message for JobService.ListHyperparameterTuningJobs. |
parent |
`str`
Required. The resource name of the Location to list the HyperparameterTuningJobs from. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.services.job_service.pagers.ListHyperparameterTuningJobsAsyncPager | Response message for JobService.ListHyperparameterTuningJobs Iterating over this object will yield results and resolve additional pages automatically. |
list_model_deployment_monitoring_jobs
list_model_deployment_monitoring_jobs(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.ListModelDeploymentMonitoringJobsRequest, dict]] = None, *, parent: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Lists ModelDeploymentMonitoringJobs in a Location.
from google.cloud import aiplatform_v1beta1

def sample_list_model_deployment_monitoring_jobs():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.ListModelDeploymentMonitoringJobsRequest(
        parent="parent_value",
    )

    # Make the request
    page_result = client.list_model_deployment_monitoring_jobs(request=request)

    # Handle the response
    for response in page_result:
        print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.ListModelDeploymentMonitoringJobsRequest, dict]
The request object. Request message for JobService.ListModelDeploymentMonitoringJobs. |
parent |
`str`
Required. The parent of the ModelDeploymentMonitoringJob. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.services.job_service.pagers.ListModelDeploymentMonitoringJobsAsyncPager | Response message for JobService.ListModelDeploymentMonitoringJobs. Iterating over this object will yield results and resolve additional pages automatically. |
model_deployment_monitoring_job_path
model_deployment_monitoring_job_path(
project: str, location: str, model_deployment_monitoring_job: str
)
Returns a fully-qualified model_deployment_monitoring_job string.
model_path
model_path(project: str, location: str, model: str)
Returns a fully-qualified model string.
network_path
network_path(project: str, network: str)
Returns a fully-qualified network string.
parse_batch_prediction_job_path
parse_batch_prediction_job_path(path: str)
Parses a batch_prediction_job path into its component segments.
parse_common_billing_account_path
parse_common_billing_account_path(path: str)
Parse a billing_account path into its component segments.
parse_common_folder_path
parse_common_folder_path(path: str)
Parse a folder path into its component segments.
parse_common_location_path
parse_common_location_path(path: str)
Parse a location path into its component segments.
parse_common_organization_path
parse_common_organization_path(path: str)
Parse an organization path into its component segments.
parse_common_project_path
parse_common_project_path(path: str)
Parse a project path into its component segments.
parse_custom_job_path
parse_custom_job_path(path: str)
Parses a custom_job path into its component segments.
parse_data_labeling_job_path
parse_data_labeling_job_path(path: str)
Parses a data_labeling_job path into its component segments.
parse_dataset_path
parse_dataset_path(path: str)
Parses a dataset path into its component segments.
parse_endpoint_path
parse_endpoint_path(path: str)
Parses an endpoint path into its component segments.
parse_hyperparameter_tuning_job_path
parse_hyperparameter_tuning_job_path(path: str)
Parses a hyperparameter_tuning_job path into its component segments.
parse_model_deployment_monitoring_job_path
parse_model_deployment_monitoring_job_path(path: str)
Parses a model_deployment_monitoring_job path into its component segments.
parse_model_path
parse_model_path(path: str)
Parses a model path into its component segments.
parse_network_path
parse_network_path(path: str)
Parses a network path into its component segments.
parse_tensorboard_path
parse_tensorboard_path(path: str)
Parses a tensorboard path into its component segments.
parse_trial_path
parse_trial_path(path: str)
Parses a trial path into its component segments.
pause_model_deployment_monitoring_job
pause_model_deployment_monitoring_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.PauseModelDeploymentMonitoringJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Pauses a ModelDeploymentMonitoringJob. If the job is running, the server makes a best effort to cancel it and sets ModelDeploymentMonitoringJob.state to PAUSED.
from google.cloud import aiplatform_v1beta1

def sample_pause_model_deployment_monitoring_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.PauseModelDeploymentMonitoringJobRequest(
        name="name_value",
    )

    # Make the request
    client.pause_model_deployment_monitoring_job(request=request)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.PauseModelDeploymentMonitoringJobRequest, dict]
The request object. Request message for JobService.PauseModelDeploymentMonitoringJob. |
name |
`str`
Required. The resource name of the ModelDeploymentMonitoringJob to pause. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
resume_model_deployment_monitoring_job
resume_model_deployment_monitoring_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.ResumeModelDeploymentMonitoringJobRequest, dict]] = None, *, name: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Resumes a paused ModelDeploymentMonitoringJob. It will start to run from the next scheduled time. A deleted ModelDeploymentMonitoringJob can't be resumed.
from google.cloud import aiplatform_v1beta1

def sample_resume_model_deployment_monitoring_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.ResumeModelDeploymentMonitoringJobRequest(
        name="name_value",
    )

    # Make the request
    client.resume_model_deployment_monitoring_job(request=request)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.ResumeModelDeploymentMonitoringJobRequest, dict]
The request object. Request message for JobService.ResumeModelDeploymentMonitoringJob. |
name |
`str`
Required. The resource name of the ModelDeploymentMonitoringJob to resume. Format: |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
search_model_deployment_monitoring_stats_anomalies
search_model_deployment_monitoring_stats_anomalies(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest, dict]] = None, *, model_deployment_monitoring_job: Optional[str] = None, deployed_model_id: Optional[str] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Searches Model Monitoring Statistics generated within a given time window.
from google.cloud import aiplatform_v1beta1

def sample_search_model_deployment_monitoring_stats_anomalies():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    request = aiplatform_v1beta1.SearchModelDeploymentMonitoringStatsAnomaliesRequest(
        model_deployment_monitoring_job="model_deployment_monitoring_job_value",
        deployed_model_id="deployed_model_id_value",
    )

    # Make the request
    page_result = client.search_model_deployment_monitoring_stats_anomalies(request=request)

    # Handle the response
    for response in page_result:
        print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.SearchModelDeploymentMonitoringStatsAnomaliesRequest, dict]
The request object. Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies. |
model_deployment_monitoring_job |
`str`
Required. ModelDeploymentMonitoring Job resource name. Format: |
deployed_model_id |
`str`
Required. The DeployedModel ID of the [ModelDeploymentMonitoringObjectiveConfig.deployed_model_id]. This corresponds to the |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.cloud.aiplatform_v1beta1.services.job_service.pagers.SearchModelDeploymentMonitoringStatsAnomaliesAsyncPager | Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies. Iterating over this object will yield results and resolve additional pages automatically. |
tensorboard_path
tensorboard_path(project: str, location: str, tensorboard: str)
Returns a fully-qualified tensorboard string.
trial_path
trial_path(project: str, location: str, study: str, trial: str)
Returns a fully-qualified trial string.
update_model_deployment_monitoring_job
update_model_deployment_monitoring_job(request: Optional[Union[google.cloud.aiplatform_v1beta1.types.job_service.UpdateModelDeploymentMonitoringJobRequest, dict]] = None, *, model_deployment_monitoring_job: Optional[google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job.ModelDeploymentMonitoringJob] = None, update_mask: Optional[google.protobuf.field_mask_pb2.FieldMask] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())
Updates a ModelDeploymentMonitoringJob.
from google.cloud import aiplatform_v1beta1

def sample_update_model_deployment_monitoring_job():
    # Create a client
    client = aiplatform_v1beta1.JobServiceClient()

    # Initialize request argument(s)
    model_deployment_monitoring_job = aiplatform_v1beta1.ModelDeploymentMonitoringJob()
    model_deployment_monitoring_job.display_name = "display_name_value"
    model_deployment_monitoring_job.endpoint = "endpoint_value"

    request = aiplatform_v1beta1.UpdateModelDeploymentMonitoringJobRequest(
        model_deployment_monitoring_job=model_deployment_monitoring_job,
    )

    # Make the request
    operation = client.update_model_deployment_monitoring_job(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Name | Description |
request |
Union[google.cloud.aiplatform_v1beta1.types.UpdateModelDeploymentMonitoringJobRequest, dict]
The request object. Request message for JobService.UpdateModelDeploymentMonitoringJob. |
model_deployment_monitoring_job |
ModelDeploymentMonitoringJob
Required. The model monitoring configuration which replaces the resource on the server. This corresponds to the |
update_mask |
`google.protobuf.field_mask_pb2.FieldMask`
Required. The update mask is used to specify the fields to be overwritten in the ModelDeploymentMonitoringJob resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to |
retry |
google.api_core.retry.Retry
Designation of what errors, if any, should be retried. |
timeout |
float
The timeout for this request. |
metadata |
Sequence[Tuple[str, str]]
Strings which should be sent along with the request as metadata. |
Type | Description |
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be ModelDeploymentMonitoringJob Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors. |
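To tie the pieces together, a hedged end-to-end sketch of the async update flow: only display_name is listed in the update_mask, so only that field is overwritten. Resource names and values are placeholders:
import asyncio
from google.protobuf import field_mask_pb2
from google.cloud import aiplatform_v1beta1

async def update_model_deployment_monitoring_job_async():
    client = aiplatform_v1beta1.JobServiceAsyncClient()

    job = aiplatform_v1beta1.ModelDeploymentMonitoringJob(
        name="projects/my-project/locations/us-central1/modelDeploymentMonitoringJobs/123",
        display_name="renamed-monitoring-job",
    )

    operation = await client.update_model_deployment_monitoring_job(
        model_deployment_monitoring_job=job,
        update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
    )

    # The update is a long-running operation; its result is the updated job.
    updated = await operation.result()
    print(updated.display_name)

asyncio.run(update_model_deployment_monitoring_job_async())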