Package types (1.1.1)

API documentation for aiplatform_v1.types package.

Classes

ActiveLearningConfig

Parameters that configure the active learning pipeline. Active learning will label the data incrementally by several iterations. For every iteration, it will select a batch of data based on the sampling strategy.

Annotation

Used to assign specific AnnotationSpec to a particular area of a DataItem or the whole part of the DataItem.

AnnotationSpec

Identifies a concept with which DataItems may be annotated.

AutomaticResources

A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration. Each Model supporting these resources documents its specific guidelines.

BatchDedicatedResources

A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.

BatchMigrateResourcesOperationMetadata

Runtime operation information for MigrationService.BatchMigrateResources.

BatchMigrateResourcesRequest

Request message for MigrationService.BatchMigrateResources.

BatchMigrateResourcesResponse

Response message for MigrationService.BatchMigrateResources.

BatchPredictionJob

A job that uses a Model to produce predictions on multiple [input instances][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.

BigQueryDestination

The BigQuery location for the output content.

.. attribute:: output_uri

Required. BigQuery URI to a project or table, up to 2000 characters long.

When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist.

Accepted forms:

  • BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId.bqTableId.

    :type: str

BigQuerySource

The BigQuery location for the input content.

.. attribute:: input_uri

Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms:

  • BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

    :type: str
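A minimal sketch of constructing these messages with the Python client, assuming a hypothetical project and table; the field names output_uri and input_uri are taken from the attribute descriptions above::

    from google.cloud import aiplatform_v1

    # Output to a project: Vertex AI creates the dataset and table.
    destination = aiplatform_v1.types.BigQueryDestination(
        output_uri="bq://my-project"
    )

    # Input from an existing table (full table reference).
    source = aiplatform_v1.types.BigQuerySource(
        input_uri="bq://my-project.my_dataset.my_table"
    )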

CancelBatchPredictionJobRequest

Request message for JobService.CancelBatchPredictionJob.

CancelCustomJobRequest

Request message for JobService.CancelCustomJob.

CancelDataLabelingJobRequest

Request message for JobService.CancelDataLabelingJob.

CancelHyperparameterTuningJobRequest

Request message for JobService.CancelHyperparameterTuningJob.

CancelTrainingPipelineRequest

Request message for PipelineService.CancelTrainingPipeline.

CompletionStats

Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.

ContainerRegistryDestination

The Container Registry location for the container image.

.. attribute:: output_uri

Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms:

  • Google Container Registry path. For example: gcr.io/projectId/imageName:tag.

  • Artifact Registry path. For example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag.

    If a tag is not specified, "latest" will be used as the default tag.

    :type: str
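As an illustration, a minimal sketch of this message in the Python client; the Artifact Registry image path is a placeholder::

    from google.cloud import aiplatform_v1

    # Export destination pointing at an Artifact Registry image path;
    # if no tag is given, "latest" is used.
    destination = aiplatform_v1.types.ContainerRegistryDestination(
        output_uri="us-central1-docker.pkg.dev/my-project/my-repo/my-image:v1"
    )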

ContainerSpec

The spec of a Container.

.. attribute:: image_uri

Required. The URI of a container image in the Container Registry that is to be run on each worker replica.

:type: str

CreateBatchPredictionJobRequest

Request message for JobService.CreateBatchPredictionJob.

CreateCustomJobRequest

Request message for JobService.CreateCustomJob.

CreateDataLabelingJobRequest

Request message for JobService.CreateDataLabelingJob.

CreateDatasetOperationMetadata

Runtime operation information for DatasetService.CreateDataset.

CreateDatasetRequest

Request message for DatasetService.CreateDataset.

CreateEndpointOperationMetadata

Runtime operation information for EndpointService.CreateEndpoint.

CreateEndpointRequest

Request message for EndpointService.CreateEndpoint.

CreateHyperparameterTuningJobRequest

Request message for JobService.CreateHyperparameterTuningJob.

CreateSpecialistPoolOperationMetadata

Runtime operation information for SpecialistPoolService.CreateSpecialistPool.

CreateSpecialistPoolRequest

Request message for SpecialistPoolService.CreateSpecialistPool.

CreateTrainingPipelineRequest

Request message for PipelineService.CreateTrainingPipeline.

CustomJob

Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters terminal state (failed or succeeded).

CustomJobSpec

Represents the spec of a CustomJob.

.. attribute:: worker_pool_specs

Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.

:type: Sequence[google.cloud.aiplatform_v1.types.WorkerPoolSpec]
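A rough sketch of how worker_pool_specs can be assembled with the Python client; the machine type, container image, arguments, and disk values are placeholders::

    from google.cloud import aiplatform_v1

    worker_pool = aiplatform_v1.types.WorkerPoolSpec(
        machine_spec=aiplatform_v1.types.MachineSpec(machine_type="n1-standard-4"),
        replica_count=1,
        container_spec=aiplatform_v1.types.ContainerSpec(
            image_uri="gcr.io/my-project/my-training-image:latest",
            args=["--epochs=10"],
        ),
        disk_spec=aiplatform_v1.types.DiskSpec(
            boot_disk_type="pd-ssd",
            boot_disk_size_gb=100,
        ),
    )

    job_spec = aiplatform_v1.types.CustomJobSpec(worker_pool_specs=[worker_pool])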

DataItem

A piece of data in a Dataset. Could be an image, a video, a document or plain text.

DataLabelingJob

DataLabelingJob is used to trigger a human labeling job on unlabeled data from a Dataset.

Dataset

A collection of DataItems and Annotations on them.

.. attribute:: name

Output only. The resource name of the Dataset.

:type: str

DedicatedResources

A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.

DeleteBatchPredictionJobRequest

Request message for JobService.DeleteBatchPredictionJob.

DeleteCustomJobRequest

Request message for JobService.DeleteCustomJob.

DeleteDataLabelingJobRequest

Request message for JobService.DeleteDataLabelingJob.

DeleteDatasetRequest

Request message for DatasetService.DeleteDataset.

DeleteEndpointRequest

Request message for EndpointService.DeleteEndpoint.

DeleteHyperparameterTuningJobRequest

Request message for JobService.DeleteHyperparameterTuningJob.

DeleteModelRequest

Request message for ModelService.DeleteModel.

DeleteOperationMetadata

Details of operations that perform deletes of any entities.

.. attribute:: generic_metadata

The common part of the operation metadata.

:type: google.cloud.aiplatform_v1.types.GenericOperationMetadata

DeleteSpecialistPoolRequest

Request message for SpecialistPoolService.DeleteSpecialistPool.

DeleteTrainingPipelineRequest

Request message for PipelineService.DeleteTrainingPipeline.

DeployModelOperationMetadata

Runtime operation information for EndpointService.DeployModel.

DeployModelRequest

Request message for EndpointService.DeployModel.

DeployModelResponse

Response message for EndpointService.DeployModel.

DeployedModel

A deployment of a Model. Endpoints contain one or more DeployedModels.

DeployedModelRef

Points to a DeployedModel.

.. attribute:: endpoint

Immutable. A resource name of an Endpoint.

:type: str

DiskSpec

Represents the spec of disk options.

.. attribute:: boot_disk_type

Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

:type: str

EncryptionSpec

Represents a customer-managed encryption key spec that can be applied to a top-level resource.

Endpoint

Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.

EnvVar

Represents an environment variable present in a Container or Python Module.

ExportDataConfig

Describes what part of the Dataset is to be exported, the destination of the export and how to export.

ExportDataOperationMetadata

Runtime operation information for DatasetService.ExportData.

ExportDataRequest

Request message for DatasetService.ExportData.

ExportDataResponse

Response message for DatasetService.ExportData.

ExportModelOperationMetadata

Details of ModelService.ExportModel operation.

ExportModelRequest

Request message for ModelService.ExportModel.

ExportModelResponse

Response message of ModelService.ExportModel operation.

FilterSplit

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any filter in this message is meant to match nothing, it can be set to '-' (the minus sign).

Supported only for unstructured Datasets.

FractionSplit

Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for testing.
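A minimal sketch of an explicit FractionSplit; the values are arbitrary but must sum to at most 1::

    from google.cloud import aiplatform_v1

    split = aiplatform_v1.types.FractionSplit(
        training_fraction=0.8,
        validation_fraction=0.1,
        test_fraction=0.1,
    )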

GcsDestination

The Google Cloud Storage location where the output is to be written.

GcsSource

The Google Cloud Storage location for the input content.

.. attribute:: uris

Required. Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.

:type: Sequence[str]
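For example, a GcsSource pointing at several input files, one of them via a wildcard; the bucket and paths are hypothetical::

    from google.cloud import aiplatform_v1

    source = aiplatform_v1.types.GcsSource(
        uris=[
            "gs://my-bucket/data/train-*.jsonl",  # wildcard over shards
            "gs://my-bucket/data/extra.jsonl",
        ]
    )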

GenericOperationMetadata

Generic Metadata shared by all operations.

.. attribute:: partial_failures

Output only. Partial failures encountered, for example, single files that couldn't be read. This field should never exceed 20 entries. The status details field will contain standard GCP error details.

:type: Sequence[google.rpc.status_pb2.Status]

GetAnnotationSpecRequest

Request message for DatasetService.GetAnnotationSpec.

GetBatchPredictionJobRequest

Request message for JobService.GetBatchPredictionJob.

GetCustomJobRequest

Request message for JobService.GetCustomJob.

GetDataLabelingJobRequest

Request message for JobService.GetDataLabelingJob.

GetDatasetRequest

Request message for DatasetService.GetDataset.

GetEndpointRequest

Request message for EndpointService.GetEndpoint.

GetHyperparameterTuningJobRequest

Request message for JobService.GetHyperparameterTuningJob.

GetModelEvaluationRequest

Request message for ModelService.GetModelEvaluation.

GetModelEvaluationSliceRequest

Request message for ModelService.GetModelEvaluationSlice.

GetModelRequest

Request message for ModelService.GetModel.

GetSpecialistPoolRequest

Request message for SpecialistPoolService.GetSpecialistPool.

GetTrainingPipelineRequest

Request message for PipelineService.GetTrainingPipeline.

HyperparameterTuningJob

Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.

ImportDataConfig

Describes the location from which data is imported into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.

ImportDataOperationMetadata

Runtime operation information for DatasetService.ImportData.

ImportDataRequest

Request message for DatasetService.ImportData.

ImportDataResponse

Response message for DatasetService.ImportData.

InputDataConfig

Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.

ListAnnotationsRequest

Request message for DatasetService.ListAnnotations.

ListAnnotationsResponse

Response message for DatasetService.ListAnnotations.

ListBatchPredictionJobsRequest

Request message for JobService.ListBatchPredictionJobs.

ListBatchPredictionJobsResponse

Response message for JobService.ListBatchPredictionJobs.

ListCustomJobsRequest

Request message for JobService.ListCustomJobs.

ListCustomJobsResponse

Response message for JobService.ListCustomJobs.

ListDataItemsRequest

Request message for DatasetService.ListDataItems.

ListDataItemsResponse

Response message for DatasetService.ListDataItems.

ListDataLabelingJobsRequest

Request message for JobService.ListDataLabelingJobs.

ListDataLabelingJobsResponse

Response message for JobService.ListDataLabelingJobs.

ListDatasetsRequest

Request message for DatasetService.ListDatasets.

ListDatasetsResponse

Response message for DatasetService.ListDatasets.

ListEndpointsRequest

Request message for EndpointService.ListEndpoints.

ListEndpointsResponse

Response message for EndpointService.ListEndpoints.

ListHyperparameterTuningJobsRequest

Request message for JobService.ListHyperparameterTuningJobs.

ListHyperparameterTuningJobsResponse

Response message for JobService.ListHyperparameterTuningJobs.

ListModelEvaluationSlicesRequest

Request message for ModelService.ListModelEvaluationSlices.

ListModelEvaluationSlicesResponse

Response message for ModelService.ListModelEvaluationSlices.

ListModelEvaluationsRequest

Request message for ModelService.ListModelEvaluations.

ListModelEvaluationsResponse

Response message for ModelService.ListModelEvaluations.

ListModelsRequest

Request message for ModelService.ListModels.

ListModelsResponse

Response message for ModelService.ListModels.

ListSpecialistPoolsRequest

Request message for SpecialistPoolService.ListSpecialistPools.

ListSpecialistPoolsResponse

Response message for SpecialistPoolService.ListSpecialistPools.

ListTrainingPipelinesRequest

Request message for PipelineService.ListTrainingPipelines.

ListTrainingPipelinesResponse

Response message for PipelineService.ListTrainingPipelines.

MachineSpec

Specification of a single machine.

.. attribute:: machine_type

Immutable. The type of the machine.

See the list of machine types supported for prediction: https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types

See the list of machine types supported for custom training: https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types

For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.

:type: str
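A hedged sketch of a MachineSpec with an accelerator, used inside DedicatedResources for a DeployedModel; the machine type, accelerator, and replica counts are illustrative::

    from google.cloud import aiplatform_v1

    resources = aiplatform_v1.types.DedicatedResources(
        machine_spec=aiplatform_v1.types.MachineSpec(
            machine_type="n1-standard-4",
            accelerator_type=aiplatform_v1.types.AcceleratorType.NVIDIA_TESLA_T4,
            accelerator_count=1,
        ),
        min_replica_count=1,
        max_replica_count=2,
    )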

ManualBatchTuningParameters

Manual batch tuning parameters.

.. attribute:: batch_size

Immutable. The number of records (for example, instances) of the operation given in each batch to a machine replica. The machine type and the size of a single record should be considered when setting this parameter; a higher value speeds up the batch operation's execution, but a value that is too high will result in a whole batch not fitting in a machine's memory, and the whole operation will fail. The default value is 4.

:type: int

Measurement

A message representing a Measurement of a Trial. A Measurement contains the Metrics obtained by executing a Trial using suggested hyperparameter values.

MigratableResource

Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com, or ml.googleapis.com.

MigrateResourceRequest

Config for migrating one resource from automl.googleapis.com, datalabeling.googleapis.com, or ml.googleapis.com to Vertex AI.

MigrateResourceResponse

Describes a successfully migrated resource.

.. attribute:: dataset

Migrated Dataset's resource name.

:type: str

Model

A trained machine learning Model.

.. attribute:: name

The resource name of the Model.

:type: str

ModelContainerSpec

Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification: https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core.

ModelEvaluation

A collection of metrics calculated by comparing the Model's predictions on all of the test data against annotations from the test data.

ModelEvaluationSlice

A collection of metrics calculated by comparing the Model's predictions on a slice of the test data against ground truth annotations.

Port

Represents a network port in a container.

.. attribute:: container_port

The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.

:type: int
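A sketch showing how Port and EnvVar typically appear inside a ModelContainerSpec; the serving image, variable, and port values are placeholders::

    from google.cloud import aiplatform_v1

    container = aiplatform_v1.types.ModelContainerSpec(
        image_uri="us-docker.pkg.dev/my-project/my-repo/serving-image:latest",
        env=[aiplatform_v1.types.EnvVar(name="MODEL_NAME", value="my_model")],
        ports=[aiplatform_v1.types.Port(container_port=8080)],
    )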

PredefinedSplit

Assigns input data to training, validation, and test sets based on the value of a provided key.

Supported only for tabular Datasets.

PredictRequest

Request message for PredictionService.Predict.

PredictResponse

Response message for PredictionService.Predict.

PredictSchemata

Contains the schemata used in the Model's predictions and explanations via PredictionService.Predict, [PredictionService.Explain][], and BatchPredictionJob.

PythonPackageSpec

The spec of Python packaged code.

.. attribute:: executor_image_uri

Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training: https://cloud.google.com/vertex-ai/docs/training/pre-built-containers. You must use an image from this list.

:type: str
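A minimal sketch of a PythonPackageSpec; the executor image, package location, and module name are placeholders::

    from google.cloud import aiplatform_v1

    python_spec = aiplatform_v1.types.PythonPackageSpec(
        executor_image_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-4:latest",
        package_uris=["gs://my-bucket/trainer-0.1.tar.gz"],
        python_module="trainer.task",
        args=["--epochs=10"],
    )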

ResourcesConsumed

Statistics about resource consumption.

.. attribute:: replica_hours

Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time.

:type: float

SampleConfig

Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.

Scheduling

All parameters related to queuing and scheduling of custom jobs.

SearchMigratableResourcesRequest

Request message for MigrationService.SearchMigratableResources.

SearchMigratableResourcesResponse

Response message for MigrationService.SearchMigratableResources.

SpecialistPool

SpecialistPool represents a customer's own workforce for working on their data labeling jobs. It includes a group of specialist managers who are responsible for managing the labelers in this pool as well as the customer's data labeling jobs associated with this pool. Customers create a specialist pool and start data labeling jobs on Cloud; managers and labelers work on the jobs using the CrowdCompute console.

StudySpec

Represents the specification of a Study.

.. attribute:: metrics

Required. Metric specs for the Study.

:type: Sequence[google.cloud.aiplatform_v1.types.StudySpec.MetricSpec]
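A rough sketch of a StudySpec with one metric and one tunable parameter; the metric name, parameter name, and bounds are illustrative::

    from google.cloud import aiplatform_v1

    study_spec = aiplatform_v1.types.StudySpec(
        metrics=[
            aiplatform_v1.types.StudySpec.MetricSpec(
                metric_id="accuracy",
                goal=aiplatform_v1.types.StudySpec.MetricSpec.GoalType.MAXIMIZE,
            )
        ],
        parameters=[
            aiplatform_v1.types.StudySpec.ParameterSpec(
                parameter_id="learning_rate",
                double_value_spec=aiplatform_v1.types.StudySpec.ParameterSpec.DoubleValueSpec(
                    min_value=1e-4,
                    max_value=1e-1,
                ),
            )
        ],
    )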

TimestampSplit

Assigns input data to training, validation, and test sets based on provided timestamps. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.

TrainingConfig

CMLE training config. For every active learning labeling iteration, the system will train a machine learning model on CMLE. The trained model will be used by the data sampling algorithm to select DataItems.

TrainingPipeline

The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and may optionally also export data from Vertex AI's Dataset (which becomes the training input), upload the Model to Vertex AI, and evaluate the Model.

Trial

A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.

UndeployModelOperationMetadata

Runtime operation information for EndpointService.UndeployModel.

UndeployModelRequest

Request message for EndpointService.UndeployModel.

UndeployModelResponse

Response message for EndpointService.UndeployModel.

UpdateDatasetRequest

Request message for DatasetService.UpdateDataset.

UpdateEndpointRequest

Request message for EndpointService.UpdateEndpoint.

UpdateModelRequest

Request message for ModelService.UpdateModel.

UpdateSpecialistPoolOperationMetadata

Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.

UpdateSpecialistPoolRequest

Request message for SpecialistPoolService.UpdateSpecialistPool.

UploadModelOperationMetadata

Details of ModelService.UploadModel operation.

UploadModelRequest

Request message for ModelService.UploadModel.

UploadModelResponse

Response message of ModelService.UploadModel operation.

UserActionReference

References an API call. It contains more information about the long-running operation and Jobs that are triggered by the API call.

WorkerPoolSpec

Represents the spec of a worker pool in a job.

.. attribute:: container_spec

The custom container task.

:type: google.cloud.aiplatform_v1.types.ContainerSpec