Package types (1.5.0)

API documentation for aiplatform_v1.types package.

Classes

ActiveLearningConfig

Parameters that configure the active learning pipeline. Active learning will label the data incrementally by several iterations. For every iteration, it will select a batch of data based on the sampling strategy.

AddTrialMeasurementRequest

Request message for VizierService.AddTrialMeasurement.

Annotation

Used to assign a specific AnnotationSpec to a particular area of a DataItem, or to the DataItem as a whole.

AnnotationSpec

Identifies a concept with which DataItems may be annotated.

Artifact

Instance of a general artifact.

Attribute name (str): Output only. The resource name of the Artifact.

Attribution

Attribution that explains a particular prediction output.

Attribute baseline_output_value (float): Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs.

If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index. If there are multiple baselines, their output values are averaged.

AutomaticResources

A description of resources that are, to a large degree, decided by Vertex AI and require only a modest amount of additional configuration. Each Model supporting these resources documents its specific guidelines.

AutoscalingMetricSpec

The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.

BatchDedicatedResources

A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.

BatchMigrateResourcesOperationMetadata

Runtime operation information for MigrationService.BatchMigrateResources.

BatchMigrateResourcesRequest

Request message for MigrationService.BatchMigrateResources.

BatchMigrateResourcesResponse

Response message for MigrationService.BatchMigrateResources.

BatchPredictionJob

A job that uses a Model to produce predictions on multiple input instances (see BatchPredictionJob.input_config). If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.

BigQueryDestination

The BigQuery location for the output content.

Attribute output_uri (str): Required. BigQuery URI to a project or table, up to 2000 characters long.

When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist.

Accepted forms:

  • BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
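A minimal sketch of the two documented URI forms, using the proto-plus keyword constructor; the project, dataset, and table IDs are placeholders:

    from google.cloud import aiplatform_v1

    # Output to a specific table: the Dataset must exist and the table must not.
    table_dest = aiplatform_v1.types.BigQueryDestination(
        output_uri="bq://my-project.my_dataset.my_table"  # hypothetical IDs
    )

    # Output to a project only: the Dataset and Table are created for you.
    project_dest = aiplatform_v1.types.BigQueryDestination(
        output_uri="bq://my-project"  # hypothetical project ID
    )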

BigQuerySource

The BigQuery location for the input content.

Attribute input_uri (str): Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms:

  • BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

CancelBatchPredictionJobRequest

Request message for JobService.CancelBatchPredictionJob.

CancelCustomJobRequest

Request message for JobService.CancelCustomJob.

CancelDataLabelingJobRequest

Request message for JobService.CancelDataLabelingJob.

CancelHyperparameterTuningJobRequest

Request message for JobService.CancelHyperparameterTuningJob.

CancelPipelineJobRequest

Request message for PipelineService.CancelPipelineJob.

CancelTrainingPipelineRequest

Request message for PipelineService.CancelTrainingPipeline.

CheckTrialEarlyStoppingStateMetatdata

This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.

CheckTrialEarlyStoppingStateRequest

Request message for VizierService.CheckTrialEarlyStoppingState.

CheckTrialEarlyStoppingStateResponse

Response message for VizierService.CheckTrialEarlyStoppingState.

CompleteTrialRequest

Request message for VizierService.CompleteTrial.

CompletionStats

Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.

ContainerRegistryDestination

The Container Registry location for the container image.

Attribute output_uri (str): Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms:

  • Google Container Registry path. For example: gcr.io/projectId/imageName:tag.

  • Artifact Registry path. For example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag.

If a tag is not specified, "latest" will be used as the default tag.
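A sketch of the two accepted path forms; the project, repository, image, and tag names are placeholders:

    from google.cloud import aiplatform_v1

    # Google Container Registry form.
    gcr_dest = aiplatform_v1.types.ContainerRegistryDestination(
        output_uri="gcr.io/my-project/my-image:v1"
    )

    # Artifact Registry form; omitting the tag falls back to "latest".
    ar_dest = aiplatform_v1.types.ContainerRegistryDestination(
        output_uri="us-central1-docker.pkg.dev/my-project/my-repo/my-image"
    )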

ContainerSpec

The spec of a Container.

Attribute image_uri (str): Required. The URI of a container image in the Container Registry that is to be run on each worker replica.

Context

Instance of a general context.

Attribute name (str): Output only. The resource name of the Context.

CreateBatchPredictionJobRequest

Request message for JobService.CreateBatchPredictionJob.

CreateCustomJobRequest

Request message for JobService.CreateCustomJob.

CreateDataLabelingJobRequest

Request message for JobService.CreateDataLabelingJob.

CreateDatasetOperationMetadata

Runtime operation information for DatasetService.CreateDataset.

CreateDatasetRequest

Request message for DatasetService.CreateDataset.

CreateEndpointOperationMetadata

Runtime operation information for EndpointService.CreateEndpoint.

CreateEndpointRequest

Request message for EndpointService.CreateEndpoint.

CreateHyperparameterTuningJobRequest

Request message for JobService.CreateHyperparameterTuningJob.

CreateIndexEndpointOperationMetadata

Runtime operation information for IndexEndpointService.CreateIndexEndpoint.

CreateIndexEndpointRequest

Request message for IndexEndpointService.CreateIndexEndpoint.

CreateIndexOperationMetadata

Runtime operation information for IndexService.CreateIndex.

CreateIndexRequest

Request message for IndexService.CreateIndex.

CreateModelDeploymentMonitoringJobRequest

Request message for JobService.CreateModelDeploymentMonitoringJob.

CreatePipelineJobRequest

Request message for PipelineService.CreatePipelineJob.

CreateSpecialistPoolOperationMetadata

Runtime operation information for SpecialistPoolService.CreateSpecialistPool.

CreateSpecialistPoolRequest

Request message for SpecialistPoolService.CreateSpecialistPool.

CreateStudyRequest

Request message for VizierService.CreateStudy.

CreateTrainingPipelineRequest

Request message for PipelineService.CreateTrainingPipeline.

CreateTrialRequest

Request message for VizierService.CreateTrial.

CustomJob

Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools, and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters a terminal state (failed or succeeded).

CustomJobSpec

Represents the spec of a CustomJob.

Attribute worker_pool_specs (Sequence[google.cloud.aiplatform_v1.types.WorkerPoolSpec]): Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
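A sketch of a single-pool CustomJobSpec under the fields documented on this page (worker_pool_specs, container_spec, image_uri, machine_type); the machine_spec and replica_count field names on WorkerPoolSpec, and the image URI, are assumptions:

    from google.cloud import aiplatform_v1

    job_spec = aiplatform_v1.types.CustomJobSpec(
        worker_pool_specs=[
            aiplatform_v1.types.WorkerPoolSpec(
                # machine_spec and replica_count field names are assumed here.
                machine_spec=aiplatform_v1.types.MachineSpec(
                    machine_type="n1-standard-4"
                ),
                replica_count=1,
                container_spec=aiplatform_v1.types.ContainerSpec(
                    # Hypothetical training image URI.
                    image_uri="us-docker.pkg.dev/my-project/my-repo/trainer:latest"
                ),
            )
        ]
    )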

DataItem

A piece of data in a Dataset. Could be an image, a video, a document or plain text.

DataLabelingJob

DataLabelingJob is used to trigger a human labeling job on unlabeled data from a Dataset.

Dataset

A collection of DataItems and Annotations on them.

Attribute name (str): Output only. The resource name of the Dataset.

DedicatedResources

A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.

DeleteBatchPredictionJobRequest

Request message for JobService.DeleteBatchPredictionJob.

DeleteCustomJobRequest

Request message for JobService.DeleteCustomJob.

DeleteDataLabelingJobRequest

Request message for JobService.DeleteDataLabelingJob.

DeleteDatasetRequest

Request message for DatasetService.DeleteDataset.

DeleteEndpointRequest

Request message for EndpointService.DeleteEndpoint.

DeleteHyperparameterTuningJobRequest

Request message for JobService.DeleteHyperparameterTuningJob.

DeleteIndexEndpointRequest

Request message for IndexEndpointService.DeleteIndexEndpoint.

DeleteIndexRequest

Request message for IndexService.DeleteIndex.

DeleteModelDeploymentMonitoringJobRequest

Request message for JobService.DeleteModelDeploymentMonitoringJob.

DeleteModelRequest

Request message for ModelService.DeleteModel.

DeleteOperationMetadata

Details of operations that perform deletes of any entities.

Attribute generic_metadata (google.cloud.aiplatform_v1.types.GenericOperationMetadata): The common part of the operation metadata.

DeletePipelineJobRequest

Request message for PipelineService.DeletePipelineJob.

DeleteSpecialistPoolRequest

Request message for SpecialistPoolService.DeleteSpecialistPool.

DeleteStudyRequest

Request message for VizierService.DeleteStudy.

DeleteTrainingPipelineRequest

Request message for PipelineService.DeleteTrainingPipeline.

DeleteTrialRequest

Request message for VizierService.DeleteTrial.

DeployIndexOperationMetadata

Runtime operation information for IndexEndpointService.DeployIndex.

DeployIndexRequest

Request message for IndexEndpointService.DeployIndex.

DeployIndexResponse

Response message for IndexEndpointService.DeployIndex.

DeployModelOperationMetadata

Runtime operation information for EndpointService.DeployModel.

DeployModelRequest

Request message for EndpointService.DeployModel.

DeployModelResponse

Response message for EndpointService.DeployModel.

DeployedIndex

A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.

DeployedIndexAuthConfig

Used to set up the auth on the DeployedIndex's private endpoint.

DeployedIndexRef

Points to a DeployedIndex.

Attribute index_endpoint (str): Immutable. A resource name of the IndexEndpoint.

DeployedModel

A deployment of a Model. Endpoints contain one or more DeployedModels.

DeployedModelRef

Points to a DeployedModel.

Attribute endpoint (str): Immutable. A resource name of an Endpoint.

DiskSpec

Represents the spec of disk options.

Attribute boot_disk_type (str): Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
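A one-line sketch switching from the default boot disk type to the documented alternative:

    from google.cloud import aiplatform_v1

    # Switch the boot disk from the default "pd-ssd" to a standard persistent disk.
    disk = aiplatform_v1.types.DiskSpec(boot_disk_type="pd-standard")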

EncryptionSpec

Represents a customer-managed encryption key spec that can be applied to a top-level resource.

Endpoint

Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.

EnvVar

Represents an environment variable present in a Container or Python Module.

Execution

Instance of a general execution.

Attribute name (str): Output only. The resource name of the Execution.

ExplainRequest

Request message for PredictionService.Explain.

ExplainResponse

Response message for PredictionService.Explain.

Explanation

Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.

ExplanationMetadata

Metadata describing the Model's input and output for explanation.

ExplanationMetadataOverride

The ExplanationMetadata entries that can be overridden at online explanation time (PredictionService.Explain).

ExplanationParameters

Parameters to configure explaining for the Model's predictions.

Attribute sampled_shapley_attribution (google.cloud.aiplatform_v1.types.SampledShapleyAttribution): An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.

ExplanationSpec

Specification of Model explanation.

Attribute parameters (google.cloud.aiplatform_v1.types.ExplanationParameters): Required. Parameters that configure explaining of the Model's predictions.
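A sketch of wiring ExplanationSpec and ExplanationParameters together via the sampled_shapley_attribution field documented above; the path_count field on SampledShapleyAttribution is an assumption:

    from google.cloud import aiplatform_v1

    explanation_spec = aiplatform_v1.types.ExplanationSpec(
        parameters=aiplatform_v1.types.ExplanationParameters(
            sampled_shapley_attribution=aiplatform_v1.types.SampledShapleyAttribution(
                path_count=10  # field name assumed; number of sampled feature permutations
            )
        )
    )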

ExplanationSpecOverride

The ExplanationSpec entries that can be overridden at online explanation time (PredictionService.Explain).

ExportDataConfig

Describes what part of the Dataset is to be exported, the destination of the export and how to export.

ExportDataOperationMetadata

Runtime operation information for DatasetService.ExportData.

ExportDataRequest

Request message for DatasetService.ExportData.

ExportDataResponse

Response message for DatasetService.ExportData.

ExportModelOperationMetadata

Details of ModelService.ExportModel operation.

ExportModelRequest

Request message for ModelService.ExportModel.

ExportModelResponse

Response message of ModelService.ExportModel operation.

FeatureNoiseSigma

Noise sigma by features. Noise sigma represents the standard deviation of the Gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.

FeatureStatsAnomaly

Stats and anomalies generated at a specific timestamp for a specific Feature. The start_time and end_time define the time range of the dataset that the current stats belong to, e.g. prediction traffic is bucketed into prediction datasets by time window. If the Dataset is not defined by a time window, start_time = end_time. The timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the TensorFlow-defined protos. The data_stats field contains almost identical information to the raw stats in the Vertex AI defined proto, for the UI to display.

FilterSplit

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message are meant to match nothing, they can be set to '-' (the minus sign).

Supported only for unstructured Datasets.

FractionSplit

Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for testing.
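A minimal sketch using the three fraction fields named in the description; the values mirror the documented 80/10/10 default:

    from google.cloud import aiplatform_v1

    split = aiplatform_v1.types.FractionSplit(
        training_fraction=0.8,
        validation_fraction=0.1,
        test_fraction=0.1,
    )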

GcsDestination

The Google Cloud Storage location where the output is to be written.

GcsSource

The Google Cloud Storage location for the input content.

Attribute uris (Sequence[str]): Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
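A sketch with hypothetical bucket and object names; the wildcard follows the Cloud Storage rules linked above:

    from google.cloud import aiplatform_v1

    source = aiplatform_v1.types.GcsSource(
        uris=[
            "gs://my-bucket/data/train-001.jsonl",  # hypothetical object
            "gs://my-bucket/data/train-*.jsonl",    # wildcards are allowed
        ]
    )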

GenericOperationMetadata

Generic Metadata shared by all operations.

Attribute partial_failures (Sequence[google.rpc.status_pb2.Status]): Output only. Partial failures encountered, e.g. single files that couldn't be read. This field should never exceed 20 entries. The Status details field will contain standard GCP error details.

GetAnnotationSpecRequest

Request message for DatasetService.GetAnnotationSpec.

GetBatchPredictionJobRequest

Request message for JobService.GetBatchPredictionJob.

GetCustomJobRequest

Request message for JobService.GetCustomJob.

GetDataLabelingJobRequest

Request message for JobService.GetDataLabelingJob.

GetDatasetRequest

Request message for DatasetService.GetDataset.

GetEndpointRequest

Request message for EndpointService.GetEndpoint

GetHyperparameterTuningJobRequest

Request message for JobService.GetHyperparameterTuningJob.

GetIndexEndpointRequest

Request message for IndexEndpointService.GetIndexEndpoint

GetIndexRequest

Request message for IndexService.GetIndex

GetModelDeploymentMonitoringJobRequest

Request message for JobService.GetModelDeploymentMonitoringJob.

GetModelEvaluationRequest

Request message for ModelService.GetModelEvaluation.

GetModelEvaluationSliceRequest

Request message for ModelService.GetModelEvaluationSlice.

GetModelRequest

Request message for ModelService.GetModel.

GetPipelineJobRequest

Request message for PipelineService.GetPipelineJob.

GetSpecialistPoolRequest

Request message for SpecialistPoolService.GetSpecialistPool.

GetStudyRequest

Request message for VizierService.GetStudy.

GetTrainingPipelineRequest

Request message for PipelineService.GetTrainingPipeline.

GetTrialRequest

Request message for VizierService.GetTrial.

HyperparameterTuningJob

Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.

ImportDataConfig

Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.

ImportDataOperationMetadata

Runtime operation information for DatasetService.ImportData.

ImportDataRequest

Request message for DatasetService.ImportData.

ImportDataResponse

Response message for DatasetService.ImportData.

Index

A representation of a collection of database items organized in a way that allows for approximate nearest neighbor (ANN) search.

IndexEndpoint

Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes.

IndexPrivateEndpoints

IndexPrivateEndpoints proto is used to provide paths for users to send requests via private services access.

InputDataConfig

Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.

IntegratedGradientsAttribution

An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

ListAnnotationsRequest

Request message for DatasetService.ListAnnotations.

ListAnnotationsResponse

Response message for DatasetService.ListAnnotations.

ListBatchPredictionJobsRequest

Request message for JobService.ListBatchPredictionJobs.

ListBatchPredictionJobsResponse

Response message for JobService.ListBatchPredictionJobs

ListCustomJobsRequest

Request message for JobService.ListCustomJobs.

ListCustomJobsResponse

Response message for JobService.ListCustomJobs

ListDataItemsRequest

Request message for DatasetService.ListDataItems.

ListDataItemsResponse

Response message for DatasetService.ListDataItems.

ListDataLabelingJobsRequest

Request message for JobService.ListDataLabelingJobs.

ListDataLabelingJobsResponse

Response message for JobService.ListDataLabelingJobs.

ListDatasetsRequest

Request message for DatasetService.ListDatasets.

ListDatasetsResponse

Response message for DatasetService.ListDatasets.

ListEndpointsRequest

Request message for EndpointService.ListEndpoints.

ListEndpointsResponse

Response message for EndpointService.ListEndpoints.

ListHyperparameterTuningJobsRequest

Request message for JobService.ListHyperparameterTuningJobs.

ListHyperparameterTuningJobsResponse

Response message for JobService.ListHyperparameterTuningJobs

ListIndexEndpointsRequest

Request message for IndexEndpointService.ListIndexEndpoints.

ListIndexEndpointsResponse

Response message for IndexEndpointService.ListIndexEndpoints.

ListIndexesRequest

Request message for IndexService.ListIndexes.

ListIndexesResponse

Response message for IndexService.ListIndexes.

ListModelDeploymentMonitoringJobsRequest

Request message for JobService.ListModelDeploymentMonitoringJobs.

ListModelDeploymentMonitoringJobsResponse

Response message for JobService.ListModelDeploymentMonitoringJobs.

ListModelEvaluationSlicesRequest

Request message for ModelService.ListModelEvaluationSlices.

ListModelEvaluationSlicesResponse

Response message for ModelService.ListModelEvaluationSlices.

ListModelEvaluationsRequest

Request message for ModelService.ListModelEvaluations.

ListModelEvaluationsResponse

Response message for ModelService.ListModelEvaluations.

ListModelsRequest

Request message for ModelService.ListModels.

ListModelsResponse

Response message for ModelService.ListModels

ListOptimalTrialsRequest

Request message for VizierService.ListOptimalTrials.

ListOptimalTrialsResponse

Response message for VizierService.ListOptimalTrials.

ListPipelineJobsRequest

Request message for PipelineService.ListPipelineJobs.

ListPipelineJobsResponse

Response message for PipelineService.ListPipelineJobs

ListSpecialistPoolsRequest

Request message for SpecialistPoolService.ListSpecialistPools.

ListSpecialistPoolsResponse

Response message for SpecialistPoolService.ListSpecialistPools.

ListStudiesRequest

Request message for VizierService.ListStudies.

ListStudiesResponse

Response message for VizierService.ListStudies.

ListTrainingPipelinesRequest

Request message for PipelineService.ListTrainingPipelines.

ListTrainingPipelinesResponse

Response message for PipelineService.ListTrainingPipelines

ListTrialsRequest

Request message for VizierService.ListTrials.

ListTrialsResponse

Response message for VizierService.ListTrials.

LookupStudyRequest

Request message for VizierService.LookupStudy.

MachineSpec

Specification of a single machine.

Attribute machine_type (str): Immutable. The type of the machine.

See the list of machine types supported for prediction: https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types

See the list of machine types supported for custom training: https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types

For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
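A short sketch; machine_type is the field documented above, while the accelerator_type and accelerator_count fields are assumptions included for illustration:

    from google.cloud import aiplatform_v1

    machine = aiplatform_v1.types.MachineSpec(
        machine_type="n1-standard-2",  # the documented default for DeployedModel
        # The two accelerator fields below are assumptions, not documented above.
        accelerator_type=aiplatform_v1.types.AcceleratorType.NVIDIA_TESLA_T4,
        accelerator_count=1,
    )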

ManualBatchTuningParameters

Manual batch tuning parameters.

Attribute batch_size (int): Immutable. The number of records (e.g. instances) of the operation given in each batch to a machine replica. The machine type and the size of a single record should be considered when setting this parameter: a higher value speeds up the batch operation's execution, but a value that is too high can result in a whole batch not fitting in a machine's memory, in which case the whole operation will fail. The default value is 4.
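A one-line sketch raising the per-replica batch size above the documented default of 4; the value 64 is arbitrary:

    from google.cloud import aiplatform_v1

    tuning = aiplatform_v1.types.ManualBatchTuningParameters(batch_size=64)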

Measurement

A message representing a Measurement of a Trial. A Measurement contains the Metrics obtained by executing a Trial using suggested hyperparameter values.

MigratableResource

Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.

MigrateResourceRequest

Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.

MigrateResourceResponse

Describes a successfully migrated resource.

Attribute dataset (str): Migrated Dataset's resource name.

Model

A trained machine learning Model.

Attribute name (str): The resource name of the Model.

ModelContainerSpec

Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification: https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core.

ModelDeploymentMonitoringBigQueryTable

ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.

ModelDeploymentMonitoringJob

Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.

ModelDeploymentMonitoringObjectiveConfig

ModelDeploymentMonitoringObjectiveConfig contains the pair of deployed_model_id to ModelMonitoringObjectiveConfig.

ModelDeploymentMonitoringObjectiveType

The Model Monitoring Objective types.

ModelDeploymentMonitoringScheduleConfig

The config for scheduling the monitoring job.

Attribute monitor_interval (google.protobuf.duration_pb2.Duration): Required. The model monitoring job running interval. It will be rounded up to the next full hour.
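A sketch constructing the interval as a protobuf Duration; 3600 seconds is an arbitrary choice that already lands on a full hour:

    from google.cloud import aiplatform_v1
    from google.protobuf import duration_pb2

    schedule = aiplatform_v1.types.ModelDeploymentMonitoringScheduleConfig(
        monitor_interval=duration_pb2.Duration(seconds=3600)  # rounded up to full hours
    )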

ModelEvaluation

A collection of metrics calculated by comparing Model's predictions on all of the test data against annotations from the test data.

ModelEvaluationSlice

A collection of metrics calculated by comparing Model's predictions on a slice of the test data against ground truth annotations.

ModelExplanation

Aggregated explanation metrics for a Model over a set of instances.

ModelMonitoringAlertConfig

Alert config for model monitoring.

Attribute email_alert_config (google.cloud.aiplatform_v1.types.ModelMonitoringAlertConfig.EmailAlertConfig): Email alert config.

ModelMonitoringObjectiveConfig

The objective configuration for model monitoring.

Attribute training_dataset (google.cloud.aiplatform_v1.types.ModelMonitoringObjectiveConfig.TrainingDataset): Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.

ModelMonitoringStatsAnomalies

Statistics and anomalies generated by Model Monitoring.

Attribute objective (google.cloud.aiplatform_v1.types.ModelDeploymentMonitoringObjectiveType): The Model Monitoring Objective that these stats and anomalies belong to.

NearestNeighborSearchOperationMetadata

Runtime operation metadata with regard to Matching Engine Index.

PauseModelDeploymentMonitoringJobRequest

Request message for JobService.PauseModelDeploymentMonitoringJob.

PipelineJob

An instance of a machine learning PipelineJob.

Attribute name (str): Output only. The resource name of the PipelineJob.

PipelineJobDetail

The runtime detail of PipelineJob.

Attribute pipeline_context (google.cloud.aiplatform_v1.types.Context): Output only. The context of the pipeline.

PipelineTaskDetail

The runtime detail of a task execution.

Attribute task_id (int): Output only. The system generated ID of the task.

PipelineTaskExecutorDetail

The runtime detail of a pipeline executor.

Attribute container_detail (google.cloud.aiplatform_v1.types.PipelineTaskExecutorDetail.ContainerDetail): Output only. The detailed info for a container executor.

Port

Represents a network port in a container.

Attribute container_port (int): The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.
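A one-line sketch exposing an arbitrary valid port number:

    from google.cloud import aiplatform_v1

    port = aiplatform_v1.types.Port(container_port=8080)  # any value in 1-65535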

PredefinedSplit

Assigns input data to training, validation, and test sets based on the value of a provided key.

Supported only for tabular Datasets.

PredictRequest

Request message for PredictionService.Predict.

PredictResponse

Response message for PredictionService.Predict.

PredictSchemata

Contains the schemata used in Model's predictions and explanations via PredictionService.Predict, PredictionService.Explain and BatchPredictionJob.

PythonPackageSpec

The spec of Python packaged code.

Attribute executor_image_uri (str): Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training: https://cloud.google.com/vertex-ai/docs/training/pre-built-containers. You must use an image from this list.
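A sketch of a PythonPackageSpec; executor_image_uri is documented above, while the package_uris, python_module, and args field names are assumptions, and the URIs are placeholders:

    from google.cloud import aiplatform_v1

    python_spec = aiplatform_v1.types.PythonPackageSpec(
        # One of the pre-built training images from the list linked above.
        executor_image_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
        # The three field names below are assumed for illustration.
        package_uris=["gs://my-bucket/trainer-0.1.tar.gz"],
        python_module="trainer.task",
        args=["--epochs=10"],
    )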

RawPredictRequest

Request message for PredictionService.RawPredict.

ResourcesConsumed

Statistics information about resource consumption.

Attribute replica_hours (float): Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time.

ResumeModelDeploymentMonitoringJobRequest

Request message for JobService.ResumeModelDeploymentMonitoringJob.

SampleConfig

Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.

SampledShapleyAttribution

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.

SamplingStrategy

Sampling strategy for logging; can be used for both the training and prediction dataset.

Scheduling

All parameters related to queuing and scheduling of custom jobs.

SearchMigratableResourcesRequest

Request message for MigrationService.SearchMigratableResources.

SearchMigratableResourcesResponse

Response message for MigrationService.SearchMigratableResources.

SearchModelDeploymentMonitoringStatsAnomaliesRequest

Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.

SearchModelDeploymentMonitoringStatsAnomaliesResponse

Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.

SmoothGradConfig

Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

SpecialistPool

SpecialistPool represents a customer's own workforce for working on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as the customer's data labeling jobs associated with this pool. Customers create a specialist pool and start data labeling jobs on Cloud; managers and workers handle the jobs using the CrowdCompute console.

StopTrialRequest

Request message for VizierService.StopTrial.

Study

A message representing a Study.

Attribute name (str): Output only. The name of a study. The study's globally unique identifier. Format: projects/{project}/locations/{location}/studies/{study}

StudySpec

Represents specification of a Study.

Attribute decay_curve_stopping_spec (google.cloud.aiplatform_v1.types.StudySpec.DecayCurveAutomatedStoppingSpec): The automated early stopping spec using the decay curve rule.

SuggestTrialsMetadata

Details of operations that perform Trials suggestion.

Attribute generic_metadata (google.cloud.aiplatform_v1.types.GenericOperationMetadata): Operation metadata for suggesting Trials.

SuggestTrialsRequest

Request message for VizierService.SuggestTrials.

SuggestTrialsResponse

Response message for VizierService.SuggestTrials.

ThresholdConfig

The config for feature monitoring threshold.

TimestampSplit

Assigns input data to training, validation, and test sets based on provided timestamps. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.

TrainingConfig

CMLE training config. For every active learning labeling iteration, the system will train a machine learning model on CMLE. The trained model will be used by the data sampling algorithm to select DataItems.

TrainingPipeline

The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.

Trial

A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.

UndeployIndexOperationMetadata

Runtime operation information for IndexEndpointService.UndeployIndex.

UndeployIndexRequest

Request message for IndexEndpointService.UndeployIndex.

UndeployIndexResponse

Response message for IndexEndpointService.UndeployIndex.

UndeployModelOperationMetadata

Runtime operation information for EndpointService.UndeployModel.

UndeployModelRequest

Request message for EndpointService.UndeployModel.

UndeployModelResponse

Response message for EndpointService.UndeployModel.

UpdateDatasetRequest

Request message for DatasetService.UpdateDataset.

UpdateEndpointRequest

Request message for EndpointService.UpdateEndpoint.

UpdateIndexEndpointRequest

Request message for IndexEndpointService.UpdateIndexEndpoint.

UpdateIndexOperationMetadata

Runtime operation information for IndexService.UpdateIndex.

UpdateIndexRequest

Request message for IndexService.UpdateIndex.

UpdateModelDeploymentMonitoringJobOperationMetadata

Runtime operation information for JobService.UpdateModelDeploymentMonitoringJob.

UpdateModelDeploymentMonitoringJobRequest

Request message for JobService.UpdateModelDeploymentMonitoringJob.

UpdateModelRequest

Request message for ModelService.UpdateModel.

UpdateSpecialistPoolOperationMetadata

Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.

UpdateSpecialistPoolRequest

Request message for SpecialistPoolService.UpdateSpecialistPool.

UploadModelOperationMetadata

Details of ModelService.UploadModel operation.

UploadModelRequest

Request message for ModelService.UploadModel.

UploadModelResponse

Response message of ModelService.UploadModel operation.

UserActionReference

References an API call. It contains more information about the long-running operation and Jobs that are triggered by the API call.

Value

Value is the value of the field.

Attribute int_value (int): An integer value.
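A tiny sketch; int_value is documented above, and the sibling double_value and string_value field names are assumptions:

    from google.cloud import aiplatform_v1

    v_int = aiplatform_v1.types.Value(int_value=42)
    # The two field names below are assumed for illustration.
    v_double = aiplatform_v1.types.Value(double_value=0.5)
    v_string = aiplatform_v1.types.Value(string_value="hello")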

WorkerPoolSpec

Represents the spec of a worker pool in a job.

Attribute container_spec (google.cloud.aiplatform_v1.types.ContainerSpec): The custom container task.

XraiAttribution

An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

Supported only by image Models.