Index
DatasetService (interface)
DeploymentResourcePoolService (interface)
EndpointService (interface)
EvaluationService (interface)
FeatureOnlineStoreAdminService (interface)
FeatureOnlineStoreService (interface)
FeatureRegistryService (interface)
FeaturestoreOnlineServingService (interface)
FeaturestoreService (interface)
GenAiTuningService (interface)
IndexEndpointService (interface)
IndexService (interface)
JobService (interface)
LlmUtilityService (interface)
MatchService (interface)
MetadataService (interface)
MigrationService (interface)
ModelGardenService (interface)
ModelService (interface)
NotebookService (interface)
PersistentResourceService (interface)
PipelineService (interface)
PredictionService (interface)
ScheduleService (interface)
SpecialistPoolService (interface)
TensorboardService (interface)
VizierService (interface)
AcceleratorType (enum)
AddContextArtifactsAndExecutionsRequest (message)
AddContextArtifactsAndExecutionsResponse (message)
AddContextChildrenRequest (message)
AddContextChildrenResponse (message)
AddExecutionEventsRequest (message)
AddExecutionEventsResponse (message)
AddTrialMeasurementRequest (message)
Annotation (message)
AnnotationSpec (message)
Artifact (message)
Artifact.State (enum)
AssignNotebookRuntimeOperationMetadata (message)
AssignNotebookRuntimeRequest (message)
Attribution (message)
AutomaticResources (message)
AutoscalingMetricSpec (message)
AvroSource (message)
BatchCancelPipelineJobsOperationMetadata (message)
BatchCancelPipelineJobsRequest (message)
BatchCancelPipelineJobsResponse (message)
BatchCreateFeaturesOperationMetadata (message)
BatchCreateFeaturesRequest (message)
BatchCreateFeaturesResponse (message)
BatchCreateTensorboardRunsRequest (message)
BatchCreateTensorboardRunsResponse (message)
BatchCreateTensorboardTimeSeriesRequest (message)
BatchCreateTensorboardTimeSeriesResponse (message)
BatchDedicatedResources (message)
BatchDeletePipelineJobsRequest (message)
BatchDeletePipelineJobsResponse (message)
BatchImportEvaluatedAnnotationsRequest (message)
BatchImportEvaluatedAnnotationsResponse (message)
BatchImportModelEvaluationSlicesRequest (message)
BatchImportModelEvaluationSlicesResponse (message)
BatchMigrateResourcesOperationMetadata (message)
BatchMigrateResourcesOperationMetadata.PartialResult (message)
BatchMigrateResourcesRequest (message)
BatchMigrateResourcesResponse (message)
BatchPredictionJob (message)
BatchPredictionJob.InputConfig (message)
BatchPredictionJob.InstanceConfig (message)
BatchPredictionJob.OutputConfig (message)
BatchPredictionJob.OutputInfo (message)
BatchReadFeatureValuesOperationMetadata (message)
BatchReadFeatureValuesRequest (message)
BatchReadFeatureValuesRequest.EntityTypeSpec (message)
BatchReadFeatureValuesRequest.PassThroughField (message)
BatchReadFeatureValuesResponse (message)
BatchReadTensorboardTimeSeriesDataRequest (message)
BatchReadTensorboardTimeSeriesDataResponse (message)
BigQueryDestination (message)
BigQuerySource (message)
BleuInput (message)
BleuInstance (message)
BleuMetricValue (message)
BleuResults (message)
BleuSpec (message)
Blob (message)
BlurBaselineConfig (message)
BoolArray (message)
CancelBatchPredictionJobRequest (message)
CancelCustomJobRequest (message)
CancelHyperparameterTuningJobRequest (message)
CancelNasJobRequest (message)
CancelPipelineJobRequest (message)
CancelTrainingPipelineRequest (message)
CancelTuningJobRequest (message)
Candidate (message)
Candidate.FinishReason (enum)
CheckTrialEarlyStoppingStateMetatdata (message)
CheckTrialEarlyStoppingStateRequest (message)
CheckTrialEarlyStoppingStateResponse (message)
Citation (message)
CitationMetadata (message)
CoherenceInput (message)
CoherenceInstance (message)
CoherenceResult (message)
CoherenceSpec (message)
CompleteTrialRequest (message)
CompletionStats (message)
ComputeTokensRequest (message)
ComputeTokensResponse (message)
ContainerRegistryDestination (message)
ContainerSpec (message)
Content (message)
Context (message)
CopyModelOperationMetadata (message)
CopyModelRequest (message)
CopyModelResponse (message)
CountTokensRequest (message)
CountTokensResponse (message)
CreateArtifactRequest (message)
CreateBatchPredictionJobRequest (message)
CreateContextRequest (message)
CreateCustomJobRequest (message)
CreateDatasetOperationMetadata (message)
CreateDatasetRequest (message)
CreateDatasetVersionOperationMetadata (message)
CreateDatasetVersionRequest (message)
CreateDeploymentResourcePoolOperationMetadata (message)
CreateDeploymentResourcePoolRequest (message)
CreateEndpointOperationMetadata (message)
CreateEndpointRequest (message)
CreateEntityTypeOperationMetadata (message)
CreateEntityTypeRequest (message)
CreateExecutionRequest (message)
CreateFeatureGroupOperationMetadata (message)
CreateFeatureGroupRequest (message)
CreateFeatureOnlineStoreOperationMetadata (message)
CreateFeatureOnlineStoreRequest (message)
CreateFeatureOperationMetadata (message)
CreateFeatureRequest (message)
CreateFeatureViewOperationMetadata (message)
CreateFeatureViewRequest (message)
CreateFeaturestoreOperationMetadata (message)
CreateFeaturestoreRequest (message)
CreateHyperparameterTuningJobRequest (message)
CreateIndexEndpointOperationMetadata (message)
CreateIndexEndpointRequest (message)
CreateIndexOperationMetadata (message)
CreateIndexRequest (message)
CreateMetadataSchemaRequest (message)
CreateMetadataStoreOperationMetadata (message)
CreateMetadataStoreRequest (message)
CreateModelDeploymentMonitoringJobRequest (message)
CreateNasJobRequest (message)
CreateNotebookExecutionJobOperationMetadata (message)
CreateNotebookExecutionJobRequest (message)
CreateNotebookRuntimeTemplateOperationMetadata (message)
CreateNotebookRuntimeTemplateRequest (message)
CreatePersistentResourceOperationMetadata (message)
CreatePersistentResourceRequest (message)
CreatePipelineJobRequest (message)
CreateRegistryFeatureOperationMetadata (message)
CreateScheduleRequest (message)
CreateSpecialistPoolOperationMetadata (message)
CreateSpecialistPoolRequest (message)
CreateStudyRequest (message)
CreateTensorboardExperimentRequest (message)
CreateTensorboardOperationMetadata (message)
CreateTensorboardRequest (message)
CreateTensorboardRunRequest (message)
CreateTensorboardTimeSeriesRequest (message)
CreateTrainingPipelineRequest (message)
CreateTrialRequest (message)
CreateTuningJobRequest (message)
CsvDestination (message)
CsvSource (message)
CustomJob (message)
CustomJobSpec (message)
DataItem (message)
DataItemView (message)
Dataset (message)
DatasetVersion (message)
DedicatedResources (message)
DeleteArtifactRequest (message)
DeleteBatchPredictionJobRequest (message)
DeleteContextRequest (message)
DeleteCustomJobRequest (message)
DeleteDatasetRequest (message)
DeleteDatasetVersionRequest (message)
DeleteDeploymentResourcePoolRequest (message)
DeleteEndpointRequest (message)
DeleteEntityTypeRequest (message)
DeleteExecutionRequest (message)
DeleteFeatureGroupRequest (message)
DeleteFeatureOnlineStoreRequest (message)
DeleteFeatureRequest (message)
DeleteFeatureValuesOperationMetadata (message)
DeleteFeatureValuesRequest (message)
DeleteFeatureValuesRequest.SelectEntity (message)
DeleteFeatureValuesRequest.SelectTimeRangeAndFeature (message)
DeleteFeatureValuesResponse (message)
DeleteFeatureValuesResponse.SelectEntity (message)
DeleteFeatureValuesResponse.SelectTimeRangeAndFeature (message)
DeleteFeatureViewRequest (message)
DeleteFeaturestoreRequest (message)
DeleteHyperparameterTuningJobRequest (message)
DeleteIndexEndpointRequest (message)
DeleteIndexRequest (message)
DeleteMetadataStoreOperationMetadata (message)
DeleteMetadataStoreRequest (message)
DeleteModelDeploymentMonitoringJobRequest (message)
DeleteModelRequest (message)
DeleteModelVersionRequest (message)
DeleteNasJobRequest (message)
DeleteNotebookExecutionJobRequest (message)
DeleteNotebookRuntimeRequest (message)
DeleteNotebookRuntimeTemplateRequest (message)
DeleteOperationMetadata (message)
DeletePersistentResourceRequest (message)
DeletePipelineJobRequest (message)
DeleteSavedQueryRequest (message)
DeleteScheduleRequest (message)
DeleteSpecialistPoolRequest (message)
DeleteStudyRequest (message)
DeleteTensorboardExperimentRequest (message)
DeleteTensorboardRequest (message)
DeleteTensorboardRunRequest (message)
DeleteTensorboardTimeSeriesRequest (message)
DeleteTrainingPipelineRequest (message)
DeleteTrialRequest (message)
DeployIndexOperationMetadata (message)
DeployIndexRequest (message)
DeployIndexResponse (message)
DeployModelOperationMetadata (message)
DeployModelRequest (message)
DeployModelResponse (message)
DeployedIndex (message)
DeployedIndexAuthConfig (message)
DeployedIndexAuthConfig.AuthProvider (message)
DeployedIndexRef (message)
DeployedModel (message)
DeployedModelRef (message)
DeploymentResourcePool (message)
DestinationFeatureSetting (message)
DirectPredictRequest (message)
DirectPredictResponse (message)
DirectRawPredictRequest (message)
DirectRawPredictResponse (message)
DiskSpec (message)
DoubleArray (message)
DynamicRetrievalConfig (message)
DynamicRetrievalConfig.Mode (enum)
EncryptionSpec (message)
Endpoint (message)
EntityIdSelector (message)
EntityType (message)
EnvVar (message)
ErrorAnalysisAnnotation (message)
ErrorAnalysisAnnotation.AttributedItem (message)
ErrorAnalysisAnnotation.QueryType (enum)
EvaluateInstancesRequest (message)
EvaluateInstancesResponse (message)
EvaluatedAnnotation (message)
EvaluatedAnnotation.EvaluatedAnnotationType (enum)
EvaluatedAnnotationExplanation (message)
Event (message)
Event.Type (enum)
ExactMatchInput (message)
ExactMatchInstance (message)
ExactMatchMetricValue (message)
ExactMatchResults (message)
ExactMatchSpec (message)
Examples (message)
Examples.ExampleGcsSource (message)
Examples.ExampleGcsSource.DataFormat (enum)
ExamplesOverride (message)
ExamplesOverride.DataFormat (enum)
ExamplesRestrictionsNamespace (message)
Execution (message)
Execution.State (enum)
ExplainRequest (message)
ExplainResponse (message)
Explanation (message)
ExplanationMetadata (message)
ExplanationMetadata.InputMetadata (message)
ExplanationMetadata.InputMetadata.Encoding (enum)
ExplanationMetadata.InputMetadata.FeatureValueDomain (message)
ExplanationMetadata.InputMetadata.Visualization (message)
ExplanationMetadata.InputMetadata.Visualization.ColorMap (enum)
ExplanationMetadata.InputMetadata.Visualization.OverlayType (enum)
ExplanationMetadata.InputMetadata.Visualization.Polarity (enum)
ExplanationMetadata.InputMetadata.Visualization.Type (enum)
ExplanationMetadata.OutputMetadata (message)
ExplanationMetadataOverride (message)
ExplanationMetadataOverride.InputMetadataOverride (message)
ExplanationParameters (message)
ExplanationSpec (message)
ExplanationSpecOverride (message)
ExportDataConfig (message)
ExportDataConfig.ExportUse (enum)
ExportDataOperationMetadata (message)
ExportDataRequest (message)
ExportDataResponse (message)
ExportFeatureValuesOperationMetadata (message)
ExportFeatureValuesRequest (message)
ExportFeatureValuesRequest.FullExport (message)
ExportFeatureValuesRequest.SnapshotExport (message)
ExportFeatureValuesResponse (message)
ExportFilterSplit (message)
ExportFractionSplit (message)
ExportModelOperationMetadata (message)
ExportModelOperationMetadata.OutputInfo (message)
ExportModelRequest (message)
ExportModelRequest.OutputConfig (message)
ExportModelResponse (message)
ExportTensorboardTimeSeriesDataRequest (message)
ExportTensorboardTimeSeriesDataResponse (message)
Feature (message)
Feature.MonitoringStatsAnomaly (message)
Feature.MonitoringStatsAnomaly.Objective (enum)
Feature.ValueType (enum)
FeatureGroup (message)
FeatureGroup.BigQuery (message)
FeatureNoiseSigma (message)
FeatureNoiseSigma.NoiseSigmaForFeature (message)
FeatureOnlineStore (message)
FeatureOnlineStore.Bigtable (message)
FeatureOnlineStore.Bigtable.AutoScaling (message)
FeatureOnlineStore.DedicatedServingEndpoint (message)
FeatureOnlineStore.Optimized (message)
FeatureOnlineStore.State (enum)
FeatureSelector (message)
FeatureStatsAnomaly (message)
FeatureValue (message)
FeatureValue.Metadata (message)
FeatureValueDestination (message)
FeatureValueList (message)
FeatureView (message)
FeatureView.BigQuerySource (message)
FeatureView.FeatureRegistrySource (message)
FeatureView.FeatureRegistrySource.FeatureGroup (message)
FeatureView.IndexConfig (message)
FeatureView.IndexConfig.BruteForceConfig (message)
FeatureView.IndexConfig.DistanceMeasureType (enum)
FeatureView.IndexConfig.TreeAHConfig (message)
FeatureView.SyncConfig (message)
FeatureView.VertexRagSource (message)
FeatureViewDataFormat (enum)
FeatureViewDataKey (message)
FeatureViewDataKey.CompositeKey (message)
FeatureViewSync (message)
FeatureViewSync.SyncSummary (message)
Featurestore (message)
Featurestore.OnlineServingConfig (message)
Featurestore.OnlineServingConfig.Scaling (message)
Featurestore.State (enum)
FeaturestoreMonitoringConfig (message)
FeaturestoreMonitoringConfig.ImportFeaturesAnalysis (message)
FeaturestoreMonitoringConfig.ImportFeaturesAnalysis.Baseline (enum)
FeaturestoreMonitoringConfig.ImportFeaturesAnalysis.State (enum)
FeaturestoreMonitoringConfig.SnapshotAnalysis (message)
FeaturestoreMonitoringConfig.ThresholdConfig (message)
FetchFeatureValuesRequest (message)
FetchFeatureValuesResponse (message)
FetchFeatureValuesResponse.FeatureNameValuePairList (message)
FetchFeatureValuesResponse.FeatureNameValuePairList.FeatureNameValuePair (message)
FileData (message)
FilterSplit (message)
FluencyInput (message)
FluencyInstance (message)
FluencyResult (message)
FluencySpec (message)
FractionSplit (message)
FulfillmentInput (message)
FulfillmentInstance (message)
FulfillmentResult (message)
FulfillmentSpec (message)
FunctionCall (message)
FunctionCallingConfig (message)
FunctionCallingConfig.Mode (enum)
FunctionDeclaration (message)
FunctionResponse (message)
GcsDestination (message)
GcsSource (message)
GenerateContentRequest (message)
GenerateContentResponse (message)
GenerateContentResponse.PromptFeedback (message)
GenerateContentResponse.PromptFeedback.BlockedReason (enum)
GenerateContentResponse.UsageMetadata (message)
GenerationConfig (message)
GenericOperationMetadata (message)
GenieSource (message)
GetAnnotationSpecRequest (message)
GetArtifactRequest (message)
GetBatchPredictionJobRequest (message)
GetContextRequest (message)
GetCustomJobRequest (message)
GetDatasetRequest (message)
GetDatasetVersionRequest (message)
GetDeploymentResourcePoolRequest (message)
GetEndpointRequest (message)
GetEntityTypeRequest (message)
GetExecutionRequest (message)
GetFeatureGroupRequest (message)
GetFeatureOnlineStoreRequest (message)
GetFeatureRequest (message)
GetFeatureViewRequest (message)
GetFeatureViewSyncRequest (message)
GetFeaturestoreRequest (message)
GetHyperparameterTuningJobRequest (message)
GetIndexEndpointRequest (message)
GetIndexRequest (message)
GetMetadataSchemaRequest (message)
GetMetadataStoreRequest (message)
GetModelDeploymentMonitoringJobRequest (message)
GetModelEvaluationRequest (message)
GetModelEvaluationSliceRequest (message)
GetModelRequest (message)
GetNasJobRequest (message)
GetNasTrialDetailRequest (message)
GetNotebookExecutionJobRequest (message)
GetNotebookRuntimeRequest (message)
GetNotebookRuntimeTemplateRequest (message)
GetPersistentResourceRequest (message)
GetPipelineJobRequest (message)
GetPublisherModelRequest (message)
GetScheduleRequest (message)
GetSpecialistPoolRequest (message)
GetStudyRequest (message)
GetTensorboardExperimentRequest (message)
GetTensorboardRequest (message)
GetTensorboardRunRequest (message)
GetTensorboardTimeSeriesRequest (message)
GetTrainingPipelineRequest (message)
GetTrialRequest (message)
GetTuningJobRequest (message)
GoogleSearchRetrieval (message)
GroundednessInput (message)
GroundednessInstance (message)
GroundednessResult (message)
GroundednessSpec (message)
GroundingChunk (message)
GroundingChunk.RetrievedContext (message)
GroundingChunk.Web (message)
GroundingMetadata (message)
GroundingSupport (message)
HarmCategory (enum)
HyperparameterTuningJob (message)
IdMatcher (message)
ImportDataConfig (message)
ImportDataOperationMetadata (message)
ImportDataRequest (message)
ImportDataResponse (message)
ImportFeatureValuesOperationMetadata (message)
ImportFeatureValuesRequest (message)
ImportFeatureValuesRequest.FeatureSpec (message)
ImportFeatureValuesResponse (message)
ImportModelEvaluationRequest (message)
Index (message)
Index.IndexUpdateMethod (enum)
IndexDatapoint (message)
IndexDatapoint.CrowdingTag (message)
IndexDatapoint.NumericRestriction (message)
IndexDatapoint.NumericRestriction.Operator (enum)
IndexDatapoint.Restriction (message)
IndexDatapoint.SparseEmbedding (message)
IndexEndpoint (message)
IndexPrivateEndpoints (message)
IndexStats (message)
InputDataConfig (message)
Int64Array (message)
IntegratedGradientsAttribution (message)
JobState (enum)
LargeModelReference (message)
LineageSubgraph (message)
ListAnnotationsRequest (message)
ListAnnotationsResponse (message)
ListArtifactsRequest (message)
ListArtifactsResponse (message)
ListBatchPredictionJobsRequest (message)
ListBatchPredictionJobsResponse (message)
ListContextsRequest (message)
ListContextsResponse (message)
ListCustomJobsRequest (message)
ListCustomJobsResponse (message)
ListDataItemsRequest (message)
ListDataItemsResponse (message)
ListDatasetVersionsRequest (message)
ListDatasetVersionsResponse (message)
ListDatasetsRequest (message)
ListDatasetsResponse (message)
ListDeploymentResourcePoolsRequest (message)
ListDeploymentResourcePoolsResponse (message)
ListEndpointsRequest (message)
ListEndpointsResponse (message)
ListEntityTypesRequest (message)
ListEntityTypesResponse (message)
ListExecutionsRequest (message)
ListExecutionsResponse (message)
ListFeatureGroupsRequest (message)
ListFeatureGroupsResponse (message)
ListFeatureOnlineStoresRequest (message)
ListFeatureOnlineStoresResponse (message)
ListFeatureViewSyncsRequest (message)
ListFeatureViewSyncsResponse (message)
ListFeatureViewsRequest (message)
ListFeatureViewsResponse (message)
ListFeaturesRequest (message)
ListFeaturesResponse (message)
ListFeaturestoresRequest (message)
ListFeaturestoresResponse (message)
ListHyperparameterTuningJobsRequest (message)
ListHyperparameterTuningJobsResponse (message)
ListIndexEndpointsRequest (message)
ListIndexEndpointsResponse (message)
ListIndexesRequest (message)
ListIndexesResponse (message)
ListMetadataSchemasRequest (message)
ListMetadataSchemasResponse (message)
ListMetadataStoresRequest (message)
ListMetadataStoresResponse (message)
ListModelDeploymentMonitoringJobsRequest (message)
ListModelDeploymentMonitoringJobsResponse (message)
ListModelEvaluationSlicesRequest (message)
ListModelEvaluationSlicesResponse (message)
ListModelEvaluationsRequest (message)
ListModelEvaluationsResponse (message)
ListModelVersionsRequest (message)
ListModelVersionsResponse (message)
ListModelsRequest (message)
ListModelsResponse (message)
ListNasJobsRequest (message)
ListNasJobsResponse (message)
ListNasTrialDetailsRequest (message)
ListNasTrialDetailsResponse (message)
ListNotebookExecutionJobsRequest (message)
ListNotebookExecutionJobsResponse (message)
ListNotebookRuntimeTemplatesRequest (message)
ListNotebookRuntimeTemplatesResponse (message)
ListNotebookRuntimesRequest (message)
ListNotebookRuntimesResponse (message)
ListOptimalTrialsRequest (message)
ListOptimalTrialsResponse (message)
ListPersistentResourcesRequest (message)
ListPersistentResourcesResponse (message)
ListPipelineJobsRequest (message)
ListPipelineJobsResponse (message)
ListSavedQueriesRequest (message)
ListSavedQueriesResponse (message)
ListSchedulesRequest (message)
ListSchedulesResponse (message)
ListSpecialistPoolsRequest (message)
ListSpecialistPoolsResponse (message)
ListStudiesRequest (message)
ListStudiesResponse (message)
ListTensorboardExperimentsRequest (message)
ListTensorboardExperimentsResponse (message)
ListTensorboardRunsRequest (message)
ListTensorboardRunsResponse (message)
ListTensorboardTimeSeriesRequest (message)
ListTensorboardTimeSeriesResponse (message)
ListTensorboardsRequest (message)
ListTensorboardsResponse (message)
ListTrainingPipelinesRequest (message)
ListTrainingPipelinesResponse (message)
ListTrialsRequest (message)
ListTrialsResponse (message)
ListTuningJobsRequest (message)
ListTuningJobsResponse (message)
LogprobsResult (message)
LogprobsResult.Candidate (message)
LogprobsResult.TopCandidates (message)
LookupStudyRequest (message)
MachineSpec (message)
ManualBatchTuningParameters (message)
Measurement (message)
Measurement.Metric (message)
MergeVersionAliasesRequest (message)
MetadataSchema (message)
MetadataSchema.MetadataSchemaType (enum)
MetadataStore (message)
MetadataStore.DataplexConfig (message)
MetadataStore.MetadataStoreState (message)
MigratableResource (message)
MigratableResource.AutomlDataset (message)
MigratableResource.AutomlModel (message)
MigratableResource.DataLabelingDataset (message)
MigratableResource.DataLabelingDataset.DataLabelingAnnotatedDataset (message)
MigratableResource.MlEngineModelVersion (message)
MigrateResourceRequest (message)
MigrateResourceRequest.MigrateAutomlDatasetConfig (message)
MigrateResourceRequest.MigrateAutomlModelConfig (message)
MigrateResourceRequest.MigrateDataLabelingDatasetConfig (message)
MigrateResourceRequest.MigrateDataLabelingDatasetConfig.MigrateDataLabelingAnnotatedDatasetConfig (message)
MigrateResourceRequest.MigrateMlEngineModelVersionConfig (message)
MigrateResourceResponse (message)
Model (message)
Model.BaseModelSource (message)
Model.DataStats (message)
Model.DeploymentResourcesType (enum)
Model.ExportFormat (message)
Model.ExportFormat.ExportableContent (enum)
Model.OriginalModelInfo (message)
ModelContainerSpec (message)
ModelDeploymentMonitoringBigQueryTable (message)
ModelDeploymentMonitoringBigQueryTable.LogSource (enum)
ModelDeploymentMonitoringBigQueryTable.LogType (enum)
ModelDeploymentMonitoringJob (message)
ModelDeploymentMonitoringJob.LatestMonitoringPipelineMetadata (message)
ModelDeploymentMonitoringJob.MonitoringScheduleState (enum)
ModelDeploymentMonitoringObjectiveConfig (message)
ModelDeploymentMonitoringObjectiveType (enum)
ModelDeploymentMonitoringScheduleConfig (message)
ModelEvaluation (message)
ModelEvaluation.ModelEvaluationExplanationSpec (message)
ModelEvaluationSlice (message)
ModelEvaluationSlice.Slice (message)
ModelEvaluationSlice.Slice.SliceSpec (message)
ModelEvaluationSlice.Slice.SliceSpec.Range (message)
ModelEvaluationSlice.Slice.SliceSpec.SliceConfig (message)
ModelEvaluationSlice.Slice.SliceSpec.Value (message)
ModelExplanation (message)
ModelGardenSource (message)
ModelMonitoringAlertConfig (message)
ModelMonitoringAlertConfig.EmailAlertConfig (message)
ModelMonitoringObjectiveConfig (message)
ModelMonitoringObjectiveConfig.ExplanationConfig (message)
ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline (message)
ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline.PredictionFormat (enum)
ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig (message)
ModelMonitoringObjectiveConfig.TrainingDataset (message)
ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig (message)
ModelMonitoringStatsAnomalies (message)
ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies (message)
ModelSourceInfo (message)
ModelSourceInfo.ModelSourceType (enum)
MutateDeployedIndexOperationMetadata (message)
MutateDeployedIndexRequest (message)
MutateDeployedIndexResponse (message)
MutateDeployedModelOperationMetadata (message)
MutateDeployedModelRequest (message)
MutateDeployedModelResponse (message)
NasJob (message)
NasJobOutput (message)
NasJobOutput.MultiTrialJobOutput (message)
NasJobSpec (message)
NasJobSpec.MultiTrialAlgorithmSpec (message)
NasJobSpec.MultiTrialAlgorithmSpec.MetricSpec (message)
NasJobSpec.MultiTrialAlgorithmSpec.MetricSpec.GoalType (enum)
NasJobSpec.MultiTrialAlgorithmSpec.MultiTrialAlgorithm (enum)
NasJobSpec.MultiTrialAlgorithmSpec.SearchTrialSpec (message)
NasJobSpec.MultiTrialAlgorithmSpec.TrainTrialSpec (message)
NasTrial (message)
NasTrial.State (enum)
NasTrialDetail (message)
NearestNeighborQuery (message)
NearestNeighborQuery.Embedding (message)
NearestNeighborQuery.NumericFilter (message)
NearestNeighborQuery.NumericFilter.Operator (enum)
NearestNeighborQuery.Parameters (message)
NearestNeighborQuery.StringFilter (message)
NearestNeighborSearchOperationMetadata (message)
NearestNeighborSearchOperationMetadata.ContentValidationStats (message)
NearestNeighborSearchOperationMetadata.RecordError (message)
NearestNeighborSearchOperationMetadata.RecordError.RecordErrorType (enum)
NearestNeighbors (message)
NearestNeighbors.Neighbor (message)
Neighbor (message)
NetworkSpec (message)
NfsMount (message)
NotebookEucConfig (message)
NotebookExecutionJob (message)
NotebookExecutionJob.DataformRepositorySource (message)
NotebookExecutionJob.DirectNotebookSource (message)
NotebookExecutionJob.GcsNotebookSource (message)
NotebookExecutionJobView (enum)
NotebookIdleShutdownConfig (message)
NotebookRuntime (message)
NotebookRuntime.HealthState (enum)
NotebookRuntime.RuntimeState (enum)
NotebookRuntimeTemplate (message)
NotebookRuntimeTemplateRef (message)
NotebookRuntimeType (enum)
PSCAutomationConfig (message)
PairwiseQuestionAnsweringQualityInput (message)
PairwiseQuestionAnsweringQualityInstance (message)
PairwiseQuestionAnsweringQualityResult (message)
PairwiseQuestionAnsweringQualitySpec (message)
PairwiseSummarizationQualityInput (message)
PairwiseSummarizationQualityInstance (message)
PairwiseSummarizationQualityResult (message)
PairwiseSummarizationQualitySpec (message)
Part (message)
PauseModelDeploymentMonitoringJobRequest (message)
PauseScheduleRequest (message)
PersistentDiskSpec (message)
PersistentResource (message)
PersistentResource.State (enum)
PipelineFailurePolicy (enum)
PipelineJob (message)
PipelineJob.RuntimeConfig (message)
PipelineJob.RuntimeConfig.InputArtifact (message)
PipelineJobDetail (message)
PipelineState (enum)
PipelineTaskDetail (message)
PipelineTaskDetail.ArtifactList (message)
PipelineTaskDetail.PipelineTaskStatus (message)
PipelineTaskDetail.State (enum)
PipelineTaskExecutorDetail (message)
PipelineTaskExecutorDetail.ContainerDetail (message)
PipelineTaskExecutorDetail.CustomJobDetail (message)
PipelineTemplateMetadata (message)
Port (message)
PredefinedSplit (message)
PredictRequest (message)
PredictRequestResponseLoggingConfig (message)
PredictResponse (message)
PredictSchemata (message)
Presets (message)
Presets.Modality (enum)
Presets.Query (enum)
PrivateEndpoints (message)
PrivateServiceConnectConfig (message)
Probe (message)
Probe.ExecAction (message)
PscAutomatedEndpoints (message)
PublisherModel (message)
PublisherModel.CallToAction (message)
PublisherModel.CallToAction.Deploy (message)
PublisherModel.CallToAction.Deploy.DeployMetadata (message)
PublisherModel.CallToAction.DeployGke (message)
PublisherModel.CallToAction.OpenFineTuningPipelines (message)
PublisherModel.CallToAction.OpenNotebooks (message)
PublisherModel.CallToAction.RegionalResourceReferences (message)
PublisherModel.CallToAction.ViewRestApi (message)
PublisherModel.Documentation (message)
PublisherModel.LaunchStage (enum)
PublisherModel.OpenSourceCategory (enum)
PublisherModel.ResourceReference (message)
PublisherModel.VersionState (enum)
PublisherModelView (enum)
PurgeArtifactsMetadata (message)
PurgeArtifactsRequest (message)
PurgeArtifactsResponse (message)
PurgeContextsMetadata (message)
PurgeContextsRequest (message)
PurgeContextsResponse (message)
PurgeExecutionsMetadata (message)
PurgeExecutionsRequest (message)
PurgeExecutionsResponse (message)
PythonPackageSpec (message)
QueryArtifactLineageSubgraphRequest (message)
QueryContextLineageSubgraphRequest (message)
QueryDeployedModelsRequest (message)
QueryDeployedModelsResponse (message)
QueryExecutionInputsAndOutputsRequest (message)
QuestionAnsweringCorrectnessInput (message)
QuestionAnsweringCorrectnessInstance (message)
QuestionAnsweringCorrectnessResult (message)
QuestionAnsweringCorrectnessSpec (message)
QuestionAnsweringHelpfulnessInput (message)
QuestionAnsweringHelpfulnessInstance (message)
QuestionAnsweringHelpfulnessResult (message)
QuestionAnsweringHelpfulnessSpec (message)
QuestionAnsweringQualityInput (message)
QuestionAnsweringQualityInstance (message)
QuestionAnsweringQualityResult (message)
QuestionAnsweringQualitySpec (message)
QuestionAnsweringRelevanceInput (message)
QuestionAnsweringRelevanceInstance (message)
QuestionAnsweringRelevanceResult (message)
QuestionAnsweringRelevanceSpec (message)
RawPredictRequest (message)
RayMetricSpec (message)
RaySpec (message)
ReadFeatureValuesRequest (message)
ReadFeatureValuesResponse (message)
ReadFeatureValuesResponse.EntityView (message)
ReadFeatureValuesResponse.EntityView.Data (message)
ReadFeatureValuesResponse.FeatureDescriptor (message)
ReadFeatureValuesResponse.Header (message)
ReadTensorboardBlobDataRequest (message)
ReadTensorboardBlobDataResponse (message)
ReadTensorboardSizeRequest (message)
ReadTensorboardSizeResponse (message)
ReadTensorboardTimeSeriesDataRequest (message)
ReadTensorboardTimeSeriesDataResponse (message)
ReadTensorboardUsageRequest (message)
ReadTensorboardUsageResponse (message)
ReadTensorboardUsageResponse.PerMonthUsageData (message)
ReadTensorboardUsageResponse.PerUserUsageData (message)
RebaseTunedModelOperationMetadata (message)
RebaseTunedModelRequest (message)
RebootPersistentResourceOperationMetadata (message)
RebootPersistentResourceRequest (message)
RemoveContextChildrenRequest (message)
RemoveContextChildrenResponse (message)
RemoveDatapointsRequest (message)
RemoveDatapointsResponse (message)
ReservationAffinity (message)
ReservationAffinity.Type (enum)
ResourcePool (message)
ResourcePool.AutoscalingSpec (message)
ResourceRuntime (message)
ResourceRuntimeSpec (message)
ResourcesConsumed (message)
RestoreDatasetVersionOperationMetadata (message)
RestoreDatasetVersionRequest (message)
ResumeModelDeploymentMonitoringJobRequest (message)
ResumeScheduleRequest (message)
Retrieval (message)
RetrievalMetadata (message)
RougeInput (message)
RougeInstance (message)
RougeMetricValue (message)
RougeResults (message)
RougeSpec (message)
SafetyInput (message)
SafetyInstance (message)
SafetyRating (message)
SafetyRating.HarmProbability (enum)
SafetyRating.HarmSeverity (enum)
SafetyResult (message)
SafetySetting (message)
SafetySetting.HarmBlockMethod (enum)
SafetySetting.HarmBlockThreshold (enum)
SafetySpec (message)
SampledShapleyAttribution (message)
SamplingStrategy (message)
SamplingStrategy.RandomSampleConfig (message)
SavedQuery (message)
Scalar (message)
Schedule (message)
Schedule.RunResponse (message)
Schedule.State (enum)
Scheduling (message)
Scheduling.Strategy (enum)
Schema (message)
SearchDataItemsRequest (message)
SearchDataItemsRequest.OrderByAnnotation (message)
SearchDataItemsResponse (message)
SearchEntryPoint (message)
SearchFeaturesRequest (message)
SearchFeaturesResponse (message)
SearchMigratableResourcesRequest (message)
SearchMigratableResourcesResponse (message)
SearchModelDeploymentMonitoringStatsAnomaliesRequest (message)
SearchModelDeploymentMonitoringStatsAnomaliesRequest.StatsAnomaliesObjective (message)
SearchModelDeploymentMonitoringStatsAnomaliesResponse (message)
SearchNearestEntitiesRequest (message)
SearchNearestEntitiesResponse (message)
Segment (message)
ServiceAccountSpec (message)
ShieldedVmConfig (message)
SmoothGradConfig (message)
SpecialistPool (message)
StartNotebookRuntimeOperationMetadata (message)
StartNotebookRuntimeRequest (message)
StartNotebookRuntimeResponse (message)
StopTrialRequest (message)
StratifiedSplit (message)
StreamDirectPredictRequest (message)
StreamDirectPredictResponse (message)
StreamDirectRawPredictRequest (message)
StreamDirectRawPredictResponse (message)
StreamRawPredictRequest (message)
StreamingPredictRequest (message)
StreamingPredictResponse (message)
StreamingRawPredictRequest (message)
StreamingRawPredictResponse (message)
StreamingReadFeatureValuesRequest (message)
StringArray (message)
StructFieldValue (message)
StructValue (message)
Study (message)
Study.State (enum)
StudySpec (message)
StudySpec.Algorithm (enum)
StudySpec.ConvexAutomatedStoppingSpec (message)
StudySpec.DecayCurveAutomatedStoppingSpec (message)
StudySpec.MeasurementSelectionType (enum)
StudySpec.MedianAutomatedStoppingSpec (message)
StudySpec.MetricSpec (message)
StudySpec.MetricSpec.GoalType (enum)
StudySpec.MetricSpec.SafetyMetricConfig (message)
StudySpec.ObservationNoise (enum)
StudySpec.ParameterSpec (message)
StudySpec.ParameterSpec.CategoricalValueSpec (message)
StudySpec.ParameterSpec.ConditionalParameterSpec (message)
StudySpec.ParameterSpec.ConditionalParameterSpec.CategoricalValueCondition (message)
StudySpec.ParameterSpec.ConditionalParameterSpec.DiscreteValueCondition (message)
StudySpec.ParameterSpec.ConditionalParameterSpec.IntValueCondition (message)
StudySpec.ParameterSpec.DiscreteValueSpec (message)
StudySpec.ParameterSpec.DoubleValueSpec (message)
StudySpec.ParameterSpec.IntegerValueSpec (message)
StudySpec.ParameterSpec.ScaleType (enum)
StudySpec.StudyStoppingConfig (message)
StudyTimeConstraint (message)
SuggestTrialsMetadata (message)
SuggestTrialsRequest (message)
SuggestTrialsResponse (message)
SummarizationHelpfulnessInput (message)
SummarizationHelpfulnessInstance (message)
SummarizationHelpfulnessResult (message)
SummarizationHelpfulnessSpec (message)
SummarizationQualityInput (message)
SummarizationQualityInstance (message)
SummarizationQualityResult (message)
SummarizationQualitySpec (message)
SummarizationVerbosityInput (message)
SummarizationVerbosityInstance (message)
SummarizationVerbosityResult (message)
SummarizationVerbositySpec (message)
SupervisedHyperParameters (message)
SupervisedHyperParameters.AdapterSize (enum)
SupervisedTuningDataStats (message)
SupervisedTuningDatasetDistribution (message)
SupervisedTuningDatasetDistribution.DatasetBucket (message)
SupervisedTuningSpec (message)
SyncFeatureViewRequest (message)
SyncFeatureViewResponse (message)
TFRecordDestination (message)
Tensor (message)
Tensor.DataType (enum)
Tensorboard (message)
TensorboardBlob (message)
TensorboardBlobSequence (message)
TensorboardExperiment (message)
TensorboardRun (message)
TensorboardTensor (message)
TensorboardTimeSeries (message)
TensorboardTimeSeries.Metadata (message)
TensorboardTimeSeries.ValueType (enum)
ThresholdConfig (message)
TimeSeriesData (message)
TimeSeriesDataPoint (message)
TimestampSplit (message)
TokensInfo (message)
Tool (message)
ToolCallValidInput (message)
ToolCallValidInstance (message)
ToolCallValidMetricValue (message)
ToolCallValidResults (message)
ToolCallValidSpec (message)
ToolConfig (message)
ToolNameMatchInput (message)
ToolNameMatchInstance (message)
ToolNameMatchMetricValue (message)
ToolNameMatchResults (message)
ToolNameMatchSpec (message)
ToolParameterKVMatchInput
(message)ToolParameterKVMatchInstance
(message)ToolParameterKVMatchMetricValue
(message)ToolParameterKVMatchResults
(message)ToolParameterKVMatchSpec
(message)ToolParameterKeyMatchInput
(message)ToolParameterKeyMatchInstance
(message)ToolParameterKeyMatchMetricValue
(message)ToolParameterKeyMatchResults
(message)ToolParameterKeyMatchSpec
(message)TrainingPipeline
(message)Trial
(message)Trial.Parameter
(message)Trial.State
(enum)TrialContext
(message)TunedModel
(message)TunedModelRef
(message)TuningDataStats
(message)TuningJob
(message)Type
(enum)UndeployIndexOperationMetadata
(message)UndeployIndexRequest
(message)UndeployIndexResponse
(message)UndeployModelOperationMetadata
(message)UndeployModelRequest
(message)UndeployModelResponse
(message)UnmanagedContainerModel
(message)UpdateArtifactRequest
(message)UpdateContextRequest
(message)UpdateDatasetRequest
(message)UpdateDatasetVersionRequest
(message)UpdateDeploymentResourcePoolOperationMetadata
(message)UpdateDeploymentResourcePoolRequest
(message)UpdateEndpointRequest
(message)UpdateEntityTypeRequest
(message)UpdateExecutionRequest
(message)UpdateExplanationDatasetOperationMetadata
(message)UpdateExplanationDatasetRequest
(message)UpdateExplanationDatasetResponse
(message)UpdateFeatureGroupOperationMetadata
(message)UpdateFeatureGroupRequest
(message)UpdateFeatureOnlineStoreOperationMetadata
(message)UpdateFeatureOnlineStoreRequest
(message)UpdateFeatureOperationMetadata
(message)UpdateFeatureRequest
(message)UpdateFeatureViewOperationMetadata
(message)UpdateFeatureViewRequest
(message)UpdateFeaturestoreOperationMetadata
(message)UpdateFeaturestoreRequest
(message)UpdateIndexEndpointRequest
(message)UpdateIndexOperationMetadata
(message)UpdateIndexRequest
(message)UpdateModelDeploymentMonitoringJobOperationMetadata
(message)UpdateModelDeploymentMonitoringJobRequest
(message)UpdateModelRequest
(message)UpdateNotebookRuntimeTemplateRequest
(message)UpdatePersistentResourceOperationMetadata
(message)UpdatePersistentResourceRequest
(message)UpdateScheduleRequest
(message)UpdateSpecialistPoolOperationMetadata
(message)UpdateSpecialistPoolRequest
(message)UpdateTensorboardExperimentRequest
(message)UpdateTensorboardOperationMetadata
(message)UpdateTensorboardRequest
(message)UpdateTensorboardRunRequest
(message)UpdateTensorboardTimeSeriesRequest
(message)UpgradeNotebookRuntimeOperationMetadata
(message)UpgradeNotebookRuntimeRequest
(message)UpgradeNotebookRuntimeResponse
(message)UploadModelOperationMetadata
(message)UploadModelRequest
(message)UploadModelResponse
(message)UpsertDatapointsRequest
(message)UpsertDatapointsResponse
(message)UserActionReference
(message)Value
(message)VertexAISearch
(message)VideoMetadata
(message)WorkerPoolSpec
(message)WriteFeatureValuesPayload
(message)WriteFeatureValuesRequest
(message)WriteFeatureValuesResponse
(message)WriteTensorboardExperimentDataRequest
(message)WriteTensorboardExperimentDataResponse
(message)WriteTensorboardRunDataRequest
(message)WriteTensorboardRunDataResponse
(message)XraiAttribution
(message)
DatasetService
The service that manages Vertex AI Dataset and its child resources.
| Method | Description |
| --- | --- |
| CreateDataset | Creates a Dataset. |
| CreateDatasetVersion | Creates a version from a Dataset. |
| DeleteDataset | Deletes a Dataset. |
| DeleteDatasetVersion | Deletes a Dataset version. |
| DeleteSavedQuery | Deletes a SavedQuery. |
| ExportData | Exports data from a Dataset. |
| GetAnnotationSpec | Gets an AnnotationSpec. |
| GetDataset | Gets a Dataset. |
| GetDatasetVersion | Gets a Dataset version. |
| ImportData | Imports data into a Dataset. |
| ListAnnotations | Lists Annotations that belong to a DataItem. This RPC is only available in InternalDatasetService and is only used for exporting conversation data to CCAI Insights. |
| ListDataItems | Lists DataItems in a Dataset. |
| ListDatasetVersions | Lists DatasetVersions in a Dataset. |
| ListDatasets | Lists Datasets in a Location. |
| ListSavedQueries | Lists SavedQueries in a Dataset. |
| RestoreDatasetVersion | Restores a dataset version. |
| SearchDataItems | Searches DataItems in a Dataset. |
| UpdateDataset | Updates a Dataset. |
| UpdateDatasetVersion | Updates a DatasetVersion. |
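All Dataset RPCs above address resources by hierarchical resource name. A minimal sketch of building those names (the helper functions are illustrative, not part of the API surface; the path formats follow the standard Vertex AI resource-name layout):

```python
# Sketch: build the hierarchical resource names used by DatasetService RPCs.
# The helper names are assumptions for illustration; the path formats follow
# the "projects/*/locations/*/datasets/*" layout used by Vertex AI.

def dataset_path(project: str, location: str, dataset: str) -> str:
    """Resource name of a Dataset, as passed to GetDataset/DeleteDataset."""
    return f"projects/{project}/locations/{location}/datasets/{dataset}"

def dataset_version_path(project: str, location: str,
                         dataset: str, version: str) -> str:
    """Resource name of a DatasetVersion (a child of a Dataset)."""
    return f"{dataset_path(project, location, dataset)}/datasetVersions/{version}"

name = dataset_path("my-proj", "us-central1", "123")
# projects/my-proj/locations/us-central1/datasets/123
```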
DeploymentResourcePoolService
A service that manages the DeploymentResourcePool resource.
| Method | Description |
| --- | --- |
| CreateDeploymentResourcePool | Create a DeploymentResourcePool. |
| DeleteDeploymentResourcePool | Delete a DeploymentResourcePool. |
| GetDeploymentResourcePool | Get a DeploymentResourcePool. |
| ListDeploymentResourcePools | List DeploymentResourcePools in a location. |
| QueryDeployedModels | List DeployedModels that have been deployed on this DeploymentResourcePool. |
| UpdateDeploymentResourcePool | Update a DeploymentResourcePool. |
EndpointService
A service for managing Vertex AI's Endpoints.
| Method | Description |
| --- | --- |
| CreateEndpoint | Creates an Endpoint. |
| DeleteEndpoint | Deletes an Endpoint. |
| DeployModel | Deploys a Model into this Endpoint, creating a DeployedModel within it. |
| GetEndpoint | Gets an Endpoint. |
| ListEndpoints | Lists Endpoints in a Location. |
| MutateDeployedModel | Updates an existing deployed model. Updatable fields include `min_replica_count`, `max_replica_count`, `autoscaling_metric_specs`, and `disable_container_logging`. |
| UndeployModel | Undeploys a Model from an Endpoint, removing a DeployedModel from it, and freeing all resources it's using. |
| UpdateEndpoint | Updates an Endpoint. |
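DeployModel, like most mutating RPCs in this reference, returns a long-running operation that the client polls until it is done. A minimal polling sketch (the `FakeOperation` stub is a stand-in for a real `google.longrunning.Operation`; names and intervals are illustrative assumptions):

```python
# Sketch: poll a long-running operation until `done` is set, with a timeout.
import time

class FakeOperation:
    """Stand-in operation that completes after a few refreshes."""
    def __init__(self, polls_until_done: int):
        self._remaining = polls_until_done
        self.done = False
        self.response = None

    def refresh(self):
        self._remaining -= 1
        if self._remaining <= 0:
            self.done = True
            self.response = {"deployed_model": {"id": "42"}}

def wait_for_operation(op, poll_interval_s=0.01, timeout_s=5.0):
    deadline = time.monotonic() + timeout_s
    while not op.done:
        if time.monotonic() > deadline:
            raise TimeoutError("operation did not finish in time")
        time.sleep(poll_interval_s)
        op.refresh()
    return op.response

result = wait_for_operation(FakeOperation(polls_until_done=3))
```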
EvaluationService
Vertex AI Online Evaluation Service.
| Method | Description |
| --- | --- |
| EvaluateInstances | Evaluates instances based on a given metric. |
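As a toy illustration of one metric family from the index above (ToolNameMatch), the sketch below scores instances by the fraction of predicted tool names that match the references. The function is purely illustrative and is not the service's API:

```python
# Sketch: a ToolNameMatch-style score, as a fraction of exact matches.

def tool_name_match_score(predictions, references) -> float:
    """Fraction of predicted tool names equal to the reference tool names."""
    if not predictions or len(predictions) != len(references):
        raise ValueError("predictions and references must have the same non-zero length")
    matches = sum(p == r for p, r in zip(predictions, references))
    return matches / len(predictions)

score = tool_name_match_score(["search", "code"], ["search", "calc"])  # 0.5
```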
FeatureOnlineStoreAdminService
The service that handles CRUD and List for resources for FeatureOnlineStore.
| Method | Description |
| --- | --- |
| CreateFeatureOnlineStore | Creates a new FeatureOnlineStore in a given project and location. |
| CreateFeatureView | Creates a new FeatureView in a given FeatureOnlineStore. |
| DeleteFeatureOnlineStore | Deletes a single FeatureOnlineStore. The FeatureOnlineStore must not contain any FeatureViews. |
| DeleteFeatureView | Deletes a single FeatureView. |
| GetFeatureOnlineStore | Gets details of a single FeatureOnlineStore. |
| GetFeatureView | Gets details of a single FeatureView. |
| GetFeatureViewSync | Gets details of a single FeatureViewSync. |
| ListFeatureOnlineStores | Lists FeatureOnlineStores in a given project and location. |
| ListFeatureViewSyncs | Lists FeatureViewSyncs in a given FeatureView. |
| ListFeatureViews | Lists FeatureViews in a given FeatureOnlineStore. |
| SyncFeatureView | Triggers on-demand sync for the FeatureView. |
| UpdateFeatureOnlineStore | Updates the parameters of a single FeatureOnlineStore. |
| UpdateFeatureView | Updates the parameters of a single FeatureView. |
FeatureOnlineStoreService
A service for fetching feature values from the online store.
| Method | Description |
| --- | --- |
| FetchFeatureValues | Fetches feature values under a FeatureView. |
| SearchNearestEntities | Searches the nearest entities under a FeatureView. Search only works for indexable feature views; if a feature view isn't indexable, an INVALID_ARGUMENT error is returned. |
FeatureRegistryService
The service that handles CRUD and List for resources for FeatureRegistry.
| Method | Description |
| --- | --- |
| BatchCreateFeatures | Creates a batch of Features in a given FeatureGroup. |
| CreateFeature | Creates a new Feature in a given FeatureGroup. |
| CreateFeatureGroup | Creates a new FeatureGroup in a given project and location. |
| DeleteFeature | Deletes a single Feature. |
| DeleteFeatureGroup | Deletes a single FeatureGroup. |
| GetFeature | Gets details of a single Feature. |
| GetFeatureGroup | Gets details of a single FeatureGroup. |
| ListFeatureGroups | Lists FeatureGroups in a given project and location. |
| ListFeatures | Lists Features in a given FeatureGroup. |
| UpdateFeature | Updates the parameters of a single Feature. |
| UpdateFeatureGroup | Updates the parameters of a single FeatureGroup. |
FeaturestoreOnlineServingService
A service for serving online feature values.
| Method | Description |
| --- | --- |
| ReadFeatureValues | Reads Feature values of a specific entity of an EntityType. For reading feature values of multiple entities of an EntityType, use StreamingReadFeatureValues instead. |
| StreamingReadFeatureValues | Reads Feature values for multiple entities. Depending on their size, data for different entities may be broken up across multiple responses. |
| WriteFeatureValues | Writes Feature values of one or more entities of an EntityType. The Feature values are merged into existing entities if any. The Feature values to be written must have a timestamp within the online storage retention period. |
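The merge semantics of WriteFeatureValues (incoming values are folded into the existing entity, if one exists) can be modeled as a dictionary merge. The storage layout below is an illustrative assumption, not the service's actual representation:

```python
# Sketch: WriteFeatureValues-style merge into an entity's existing features.

def write_feature_values(store: dict, entity_id: str, values: dict) -> dict:
    """Merge `values` into the entity's features, creating the entity if absent."""
    entity = store.setdefault(entity_id, {})
    entity.update(values)  # newer values overwrite; untouched features remain
    return entity

store = {}
write_feature_values(store, "user_1", {"age": 30})
write_feature_values(store, "user_1", {"country": "DE"})
# store["user_1"] == {"age": 30, "country": "DE"}
```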
FeaturestoreService
The service that handles CRUD and List for resources for Featurestore.
| Method | Description |
| --- | --- |
| BatchCreateFeatures | Creates a batch of Features in a given EntityType. |
| BatchReadFeatureValues | Batch reads Feature values from a Featurestore. This API enables batch reading Feature values, where each read instance in the batch may read Feature values of entities from one or more EntityTypes. Point-in-time correctness is guaranteed for Feature values of each read instance as of each instance's read timestamp. |
| CreateEntityType | Creates a new EntityType in a given Featurestore. |
| CreateFeature | Creates a new Feature in a given EntityType. |
| CreateFeaturestore | Creates a new Featurestore in a given project and location. |
| DeleteEntityType | Deletes a single EntityType. The EntityType must not have any Features, or `force` must be set to true for the request to succeed. |
| DeleteFeature | Deletes a single Feature. |
| DeleteFeatureValues | Deletes Feature values from a Featurestore. The progress of the deletion is tracked by the returned operation. The deleted feature values are guaranteed to be invisible to subsequent read operations after the operation is marked as successfully done. If a delete feature values operation fails, the feature values returned from reads and exports may be inconsistent. If consistency is required, the caller must retry the same delete request and wait until the new operation returned is marked as successfully done. |
| DeleteFeaturestore | Deletes a single Featurestore. The Featurestore must not contain any EntityTypes, or `force` must be set to true for the request to succeed. |
| ExportFeatureValues | Exports Feature values from all the entities of a target EntityType. |
| GetEntityType | Gets details of a single EntityType. |
| GetFeature | Gets details of a single Feature. |
| GetFeaturestore | Gets details of a single Featurestore. |
| ImportFeatureValues | Imports Feature values into the Featurestore from a source storage. The progress of the import is tracked by the returned operation. The imported features are guaranteed to be visible to subsequent read operations after the operation is marked as successfully done. If an import operation fails, the Feature values returned from reads and exports may be inconsistent. If consistency is required, the caller must retry the same import request and wait until the new operation returned is marked as successfully done. There are also scenarios where the caller can cause inconsistency. |
| ListEntityTypes | Lists EntityTypes in a given Featurestore. |
| ListFeatures | Lists Features in a given EntityType. |
| ListFeaturestores | Lists Featurestores in a given project and location. |
| SearchFeatures | Searches Features matching a query in a given project. |
| UpdateEntityType | Updates the parameters of a single EntityType. |
| UpdateFeature | Updates the parameters of a single Feature. |
| UpdateFeaturestore | Updates the parameters of a single Featurestore. |
GenAiTuningService
A service for creating and managing GenAI Tuning Jobs.
| Method | Description |
| --- | --- |
| CancelTuningJob | Cancels a TuningJob. Starts asynchronous cancellation on the TuningJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use GetTuningJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. |
| CreateTuningJob | Creates a TuningJob. A newly created TuningJob is immediately attempted to run. |
| GetTuningJob | Gets a TuningJob. |
| ListTuningJobs | Lists TuningJobs in a Location. |
| RebaseTunedModel | Rebases a TunedModel. |
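Because cancellation is best-effort, a client typically keeps checking the job's state after cancelling until it reaches a terminal state. The state names and the fake state source below are illustrative stand-ins:

```python
# Sketch: after a best-effort cancel, poll the job state until it is terminal.

TERMINAL = {"SUCCEEDED", "FAILED", "CANCELLED"}

def wait_until_terminal(get_state, max_checks: int = 100) -> str:
    """Call `get_state` until it reports a terminal job state."""
    for _ in range(max_checks):
        state = get_state()
        if state in TERMINAL:
            return state
    raise TimeoutError("job never reached a terminal state")

states = iter(["RUNNING", "CANCELLING", "CANCELLED"])
final = wait_until_terminal(lambda: next(states))  # "CANCELLED"
```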
IndexEndpointService
A service for managing Vertex AI's IndexEndpoints.
| Method | Description |
| --- | --- |
| CreateIndexEndpoint | Creates an IndexEndpoint. |
| DeleteIndexEndpoint | Deletes an IndexEndpoint. |
| DeployIndex | Deploys an Index into this IndexEndpoint, creating a DeployedIndex within it. Only non-empty Indexes can be deployed. |
| GetIndexEndpoint | Gets an IndexEndpoint. |
| ListIndexEndpoints | Lists IndexEndpoints in a Location. |
| MutateDeployedIndex | Updates an existing DeployedIndex under an IndexEndpoint. |
| UndeployIndex | Undeploys an Index from an IndexEndpoint, removing a DeployedIndex from it, and freeing all resources it's using. |
| UpdateIndexEndpoint | Updates an IndexEndpoint. |
IndexService
A service for creating and managing Vertex AI's Index resources.
| Method | Description |
| --- | --- |
| CreateIndex | Creates an Index. |
| DeleteIndex | Deletes an Index. An Index can only be deleted when all its DeployedIndexes have been undeployed. |
| GetIndex | Gets an Index. |
| ListIndexes | Lists Indexes in a Location. |
| RemoveDatapoints | Removes Datapoints from an Index. |
| UpdateIndex | Updates an Index. |
| UpsertDatapoints | Adds or updates Datapoints in an Index. |
JobService
A service for creating and managing Vertex AI's jobs.
| Method | Description |
| --- | --- |
| CancelBatchPredictionJob | Cancels a BatchPredictionJob. Starts asynchronous cancellation on the BatchPredictionJob. The server makes the best effort to cancel the job, but success is not guaranteed. Clients can use GetBatchPredictionJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. |
| CancelCustomJob | Cancels a CustomJob. Starts asynchronous cancellation on the CustomJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use GetCustomJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. |
| CancelHyperparameterTuningJob | Cancels a HyperparameterTuningJob. Starts asynchronous cancellation on the HyperparameterTuningJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use GetHyperparameterTuningJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. |
| CancelNasJob | Cancels a NasJob. Starts asynchronous cancellation on the NasJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use GetNasJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. |
| CreateBatchPredictionJob | Creates a BatchPredictionJob. A newly created BatchPredictionJob is immediately attempted to start. |
| CreateCustomJob | Creates a CustomJob. A newly created CustomJob is immediately attempted to run. |
| CreateHyperparameterTuningJob | Creates a HyperparameterTuningJob. |
| CreateModelDeploymentMonitoringJob | Creates a ModelDeploymentMonitoringJob. It will run periodically on a configured interval. |
| CreateNasJob | Creates a NasJob. |
| DeleteBatchPredictionJob | Deletes a BatchPredictionJob. Can only be called on jobs that already finished. |
| DeleteCustomJob | Deletes a CustomJob. |
| DeleteHyperparameterTuningJob | Deletes a HyperparameterTuningJob. |
| DeleteModelDeploymentMonitoringJob | Deletes a ModelDeploymentMonitoringJob. |
| DeleteNasJob | Deletes a NasJob. |
| GetBatchPredictionJob | Gets a BatchPredictionJob. |
| GetCustomJob | Gets a CustomJob. |
| GetHyperparameterTuningJob | Gets a HyperparameterTuningJob. |
| GetModelDeploymentMonitoringJob | Gets a ModelDeploymentMonitoringJob. |
| GetNasJob | Gets a NasJob. |
| GetNasTrialDetail | Gets a NasTrialDetail. |
| ListBatchPredictionJobs | Lists BatchPredictionJobs in a Location. |
| ListCustomJobs | Lists CustomJobs in a Location. |
| ListHyperparameterTuningJobs | Lists HyperparameterTuningJobs in a Location. |
| ListModelDeploymentMonitoringJobs | Lists ModelDeploymentMonitoringJobs in a Location. |
| ListNasJobs | Lists NasJobs in a Location. |
| ListNasTrialDetails | Lists top NasTrialDetails of a NasJob. |
| PauseModelDeploymentMonitoringJob | Pauses a ModelDeploymentMonitoringJob. If the job is running, the server makes a best effort to cancel the job. Will mark ModelDeploymentMonitoringJob.state to 'PAUSED'. |
| ResumeModelDeploymentMonitoringJob | Resumes a paused ModelDeploymentMonitoringJob. It will start to run from the next scheduled time. A deleted ModelDeploymentMonitoringJob can't be resumed. |
| SearchModelDeploymentMonitoringStatsAnomalies | Searches Model Monitoring Statistics generated within a given time window. |
| UpdateModelDeploymentMonitoringJob | Updates a ModelDeploymentMonitoringJob. |
LlmUtilityService
Service for LLM related utility functions.
| Method | Description |
| --- | --- |
| ComputeTokens | Returns a list of tokens based on the input text. |
| CountTokens | Performs token counting. |
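A CountTokens request for a Gemini-style model carries the input as a `contents` array of role/parts objects. The sketch below only builds that JSON body with the standard library; the exact wire format shown is a simplified assumption:

```python
# Sketch: assemble a CountTokens-style JSON request body (no network call).
import json

def count_tokens_body(text: str) -> str:
    """Serialize a single-turn user message in the contents/parts shape."""
    body = {"contents": [{"role": "user", "parts": [{"text": text}]}]}
    return json.dumps(body)

payload = json.loads(count_tokens_body("hello"))
```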
MatchService
MatchService is a Google managed service for efficient vector similarity search at scale.
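What MatchService does at scale can be reduced to a brute-force toy: rank stored vectors by cosine similarity to a query. The sketch below is purely illustrative and ignores the approximate-nearest-neighbor indexing that makes the real service efficient:

```python
# Sketch: brute-force vector similarity ranking (the real service uses ANN).
import math

def cosine(a, b) -> float:
    """Cosine similarity of two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query, index: dict, k: int = 1):
    """Ids of the k stored vectors most similar to the query."""
    ranked = sorted(index, key=lambda i: cosine(query, index[i]), reverse=True)
    return ranked[:k]

ids = nearest([1.0, 0.0], {"a": [0.9, 0.1], "b": [0.0, 1.0]})  # ["a"]
```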
MetadataService
Service for reading and writing metadata entries.
| Method | Description |
| --- | --- |
| AddContextArtifactsAndExecutions | Adds a set of Artifacts and Executions to a Context. If any of the Artifacts or Executions have already been added to a Context, they are simply skipped. |
| AddContextChildren | Adds a set of Contexts as children to a parent Context. If any of the child Contexts have already been added to the parent Context, they are simply skipped. If this call would create a cycle or cause any Context to have more than 10 parents, the request will fail with an INVALID_ARGUMENT error. |
| AddExecutionEvents | Adds Events to the specified Execution. An Event indicates whether an Artifact was used as an input or output for an Execution. If an Event already exists between the Execution and the Artifact, the Event is skipped. |
| CreateArtifact | Creates an Artifact associated with a MetadataStore. |
| CreateContext | Creates a Context associated with a MetadataStore. |
| CreateExecution | Creates an Execution associated with a MetadataStore. |
| CreateMetadataSchema | Creates a MetadataSchema. |
| CreateMetadataStore | Initializes a MetadataStore, including allocation of resources. |
| DeleteArtifact | Deletes an Artifact. |
| DeleteContext | Deletes a stored Context. |
| DeleteExecution | Deletes an Execution. |
| DeleteMetadataStore | Deletes a single MetadataStore and all its child resources (Artifacts, Executions, and Contexts). |
| GetArtifact | Retrieves a specific Artifact. |
| GetContext | Retrieves a specific Context. |
| GetExecution | Retrieves a specific Execution. |
| GetMetadataSchema | Retrieves a specific MetadataSchema. |
| GetMetadataStore | Retrieves a specific MetadataStore. |
| ListArtifacts | Lists Artifacts in the MetadataStore. |
| ListContexts | Lists Contexts on the MetadataStore. |
| ListExecutions | Lists Executions in the MetadataStore. |
| ListMetadataSchemas | Lists MetadataSchemas. |
| ListMetadataStores | Lists MetadataStores for a Location. |
| PurgeArtifacts | Purges Artifacts. |
| PurgeContexts | Purges Contexts. |
| PurgeExecutions | Purges Executions. |
| QueryArtifactLineageSubgraph | Retrieves lineage of an Artifact represented through Artifacts and Executions connected by Event edges and returned as a LineageSubgraph. |
| QueryContextLineageSubgraph | Retrieves Artifacts and Executions within the specified Context, connected by Event edges and returned as a LineageSubgraph. |
| QueryExecutionInputsAndOutputs | Obtains the set of input and output Artifacts for this Execution, in the form of a LineageSubgraph that also contains the Execution and connecting Events. |
| RemoveContextChildren | Removes a set of child Contexts from a parent Context. If any of the child Contexts were NOT added to the parent Context, they are simply skipped. |
| UpdateArtifact | Updates a stored Artifact. |
| UpdateContext | Updates a stored Context. |
| UpdateExecution | Updates a stored Execution. |
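The lineage queries above walk Artifacts and Executions connected by Event edges. A toy breadth-first traversal over an adjacency list stands in for the real LineageSubgraph (the graph encoding below is an illustrative assumption):

```python
# Sketch: collect the lineage subgraph reachable from one node via Event edges.
from collections import deque

def lineage_subgraph(edges: dict, start: str) -> set:
    """All nodes reachable from `start`, following directed Event edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

edges = {"artifact:raw": ["execution:train"],
         "execution:train": ["artifact:model"]}
reachable = lineage_subgraph(edges, "artifact:raw")
```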
MigrationService
A service that migrates resources from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.
| Method | Description |
| --- | --- |
| BatchMigrateResources | Batch migrates resources from ml.googleapis.com, automl.googleapis.com, and datalabeling.googleapis.com to Vertex AI. |
| SearchMigratableResources | Searches all of the resources in automl.googleapis.com, datalabeling.googleapis.com, and ml.googleapis.com that can be migrated to Vertex AI's given location. |
ModelGardenService
The interface of Model Garden Service.
| Method | Description |
| --- | --- |
| GetPublisherModel | Gets a Model Garden publisher model. |
ModelService
A service for managing Vertex AI's machine learning Models.
| Method | Description |
| --- | --- |
| BatchImportEvaluatedAnnotations | Imports a list of externally generated EvaluatedAnnotations. |
| BatchImportModelEvaluationSlices | Imports a list of externally generated ModelEvaluationSlices. |
| CopyModel | Copies an already existing Vertex AI Model into the specified Location. The source Model must exist in the same Project. When copying custom Models, the users themselves are responsible for the Model.metadata content to be region-agnostic, as well as making sure that any resources (for example, files) it depends on remain accessible. |
| DeleteModel | Deletes a Model. A model cannot be deleted if any Endpoint resource has a DeployedModel based upon the model in its deployed_models field. |
| DeleteModelVersion | Deletes a Model version. A Model version can only be deleted if there are no DeployedModels created from it. |
| ExportModel | Exports a trained, exportable Model to a location specified by the user. A Model is considered to be exportable if it has at least one supported export format. |
| GetModel | Gets a Model. |
| GetModelEvaluation | Gets a ModelEvaluation. |
| GetModelEvaluationSlice | Gets a ModelEvaluationSlice. |
| ImportModelEvaluation | Imports an externally generated ModelEvaluation. |
| ListModelEvaluationSlices | Lists ModelEvaluationSlices in a ModelEvaluation. |
| ListModelEvaluations | Lists ModelEvaluations in a Model. |
| ListModelVersions | Lists versions of the specified model. |
| ListModels | Lists Models in a Location. |
| MergeVersionAliases | Merges a set of aliases for a Model version. |
| UpdateExplanationDataset | Incrementally updates the dataset used for an examples model. |
| UpdateModel | Updates a Model. |
| UploadModel | Uploads a Model artifact into Vertex AI. |
NotebookService
The interface for Vertex Notebook service (a.k.a. Colab on Workbench).
| Method | Description |
| --- | --- |
| AssignNotebookRuntime | Assigns a NotebookRuntime to a user for a particular Notebook file. This method either returns an existing assignment or generates a new one. |
| CreateNotebookExecutionJob | Creates a NotebookExecutionJob. |
| CreateNotebookRuntimeTemplate | Creates a NotebookRuntimeTemplate. |
| DeleteNotebookExecutionJob | Deletes a NotebookExecutionJob. |
| DeleteNotebookRuntime | Deletes a NotebookRuntime. |
| DeleteNotebookRuntimeTemplate | Deletes a NotebookRuntimeTemplate. |
| GetNotebookExecutionJob | Gets a NotebookExecutionJob. |
| GetNotebookRuntime | Gets a NotebookRuntime. |
| GetNotebookRuntimeTemplate | Gets a NotebookRuntimeTemplate. |
| ListNotebookExecutionJobs | Lists NotebookExecutionJobs in a Location. |
| ListNotebookRuntimeTemplates | Lists NotebookRuntimeTemplates in a Location. |
| ListNotebookRuntimes | Lists NotebookRuntimes in a Location. |
| StartNotebookRuntime | Starts a NotebookRuntime. |
| UpdateNotebookRuntimeTemplate | Updates a NotebookRuntimeTemplate. |
| UpgradeNotebookRuntime | Upgrades a NotebookRuntime. |
PersistentResourceService
A service for managing Vertex AI's machine learning PersistentResource.
| Method | Description |
| --- | --- |
| CreatePersistentResource | Creates a PersistentResource. |
| DeletePersistentResource | Deletes a PersistentResource. |
| GetPersistentResource | Gets a PersistentResource. |
| ListPersistentResources | Lists PersistentResources in a Location. |
| RebootPersistentResource | Reboots a PersistentResource. |
| UpdatePersistentResource | Updates a PersistentResource. |
PipelineService
A service for creating and managing Vertex AI's pipelines. This includes both TrainingPipeline
resources (used for AutoML and custom training) and PipelineJob
resources (used for Vertex AI Pipelines).
| Method | Description |
| --- | --- |
| BatchCancelPipelineJobs | Batch cancels PipelineJobs. First, the server checks whether all the jobs are in non-terminal states and skips the jobs that are already terminated. If the operation fails, none of the pipeline jobs are cancelled. The server polls the states of all the pipeline jobs periodically to check the cancellation status. This operation returns an LRO. |
| BatchDeletePipelineJobs | Batch deletes PipelineJobs. The Operation is atomic. If it fails, none of the PipelineJobs are deleted. If it succeeds, all of the PipelineJobs are deleted. |
| CancelPipelineJob | Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. |
| CancelTrainingPipeline | Cancels a TrainingPipeline. Starts asynchronous cancellation on the TrainingPipeline. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use GetTrainingPipeline or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. |
| CreatePipelineJob | Creates a PipelineJob. A PipelineJob will run immediately when created. |
| CreateTrainingPipeline | Creates a TrainingPipeline. A newly created TrainingPipeline is immediately attempted to run. |
| DeletePipelineJob | Deletes a PipelineJob. |
| DeleteTrainingPipeline | Deletes a TrainingPipeline. |
| GetPipelineJob | Gets a PipelineJob. |
| GetTrainingPipeline | Gets a TrainingPipeline. |
| ListPipelineJobs | Lists PipelineJobs in a Location. |
| ListTrainingPipelines | Lists TrainingPipelines in a Location. |
PredictionService
A service for online predictions and explanations.
| Method | Description |
| --- | --- |
| DirectPredict | Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks. |
| DirectRawPredict | Perform a unary online prediction request to a gRPC model server for custom containers. |
| Explain | Perform an online explanation. If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated. |
| GenerateContent | Generate content with multimodal inputs. |
| Predict | Perform an online prediction. |
| RawPredict | Perform an online prediction with an arbitrary HTTP payload. The response includes the following HTTP headers: X-Vertex-AI-Endpoint-Id, the ID of the Endpoint that served this prediction, and X-Vertex-AI-Deployed-Model-Id, the ID of the Endpoint's DeployedModel that served this prediction. |
| ServerStreamingPredict | Perform a server-side streaming online prediction request for Vertex LLM streaming. |
| StreamDirectPredict | Perform a streaming online prediction request to a gRPC model server for Vertex first-party products and frameworks. |
| StreamDirectRawPredict | Perform a streaming online prediction request to a gRPC model server for custom containers. |
| StreamGenerateContent | Generate content with multimodal inputs with streaming support. |
| StreamRawPredict | Perform a streaming online prediction with an arbitrary HTTP payload. |
| StreamingPredict | Perform a streaming online prediction request for Vertex first-party products and frameworks. |
| StreamingRawPredict | Perform a streaming online prediction request through gRPC. |
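Over REST, a Predict call is a POST to the regional host with the endpoint's resource name and an `instances` array in the body. The sketch below only assembles the URL and payload (no network call); the regional-host layout shown follows the public REST surface:

```python
# Sketch: assemble the URL and JSON body of a Predict REST call.
import json

def predict_request(project: str, location: str, endpoint: str, instances: list):
    """Return (url, body) for POSTing a prediction to a regional endpoint."""
    name = f"projects/{project}/locations/{location}/endpoints/{endpoint}"
    url = f"https://{location}-aiplatform.googleapis.com/v1/{name}:predict"
    return url, json.dumps({"instances": instances})

url, body = predict_request("p", "us-central1", "123", [{"x": 1}])
```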
ScheduleService
A service for creating and managing Vertex AI's Schedule resources to periodically launch scheduled runs to make API calls.
| Method | Description |
| --- | --- |
| CreateSchedule | Creates a Schedule. |
| DeleteSchedule | Deletes a Schedule. |
| GetSchedule | Gets a Schedule. |
| ListSchedules | Lists Schedules in a Location. |
| PauseSchedule | Pauses a Schedule. Will mark Schedule.state to 'PAUSED'. If the schedule is paused, no new runs will be created. |
| ResumeSchedule | Resumes a paused Schedule to start scheduling new runs. Will mark Schedule.state to 'ACTIVE'. When the Schedule is resumed, new runs will be scheduled starting from the next execution time after the current time, based on the time_specification in the Schedule. If Schedule.catch_up is set to true, all missed runs will be scheduled for backfill first. |
| UpdateSchedule | Updates an active or paused Schedule. When the Schedule is updated, new runs will be scheduled starting from the updated next execution time after the update time, based on the time_specification in the updated Schedule. All unstarted runs before the update time will be skipped, while already created runs will NOT be paused or canceled. |
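After a resume, the next run is the first execution time strictly after "now" according to the schedule's time specification. A fixed-interval toy version of that computation (the real service supports cron-style specs, which this sketch does not model):

```python
# Sketch: first execution time strictly after `now` on a fixed-interval schedule.

def next_execution(start: float, interval: float, now: float) -> float:
    """Return the first start + k*interval (k >= 0) strictly after `now`."""
    if now < start:
        return start
    k = int((now - start) // interval) + 1
    return start + k * interval

t = next_execution(start=0.0, interval=60.0, now=130.0)  # 180.0
```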
SpecialistPoolService
A service for creating and managing Customer SpecialistPools. When customers start Data Labeling jobs, they can reuse or create Specialist Pools to bring their own Specialists to label the data. Customers can add or remove Managers for the Specialist Pool on the Cloud console; Managers then get email notifications to manage Specialists and tasks on the CrowdCompute console.
| Method | Description |
| --- | --- |
| CreateSpecialistPool | Creates a SpecialistPool. |
| DeleteSpecialistPool | Deletes a SpecialistPool as well as all Specialists in the pool. |
| GetSpecialistPool | Gets a SpecialistPool. |
| ListSpecialistPools | Lists SpecialistPools in a Location. |
| UpdateSpecialistPool | Updates a SpecialistPool. |
TensorboardService
TensorboardService
| Method | Description |
| --- | --- |
| BatchCreateTensorboardRuns | Batch create TensorboardRuns. |
| BatchCreateTensorboardTimeSeries | Batch create TensorboardTimeSeries that belong to a TensorboardExperiment. |
| BatchReadTensorboardTimeSeriesData | Reads multiple TensorboardTimeSeries' data. The data point number limit is 1000 for scalars and 100 for tensors and blob references. If the number of data points stored is less than the limit, all data is returned; otherwise, the limit number of data points is randomly sampled from each time series and returned. |
| CreateTensorboard | Creates a Tensorboard. |
| CreateTensorboardExperiment | Creates a TensorboardExperiment. |
| CreateTensorboardRun | Creates a TensorboardRun. |
| CreateTensorboardTimeSeries | Creates a TensorboardTimeSeries. |
| DeleteTensorboard | Deletes a Tensorboard. |
| DeleteTensorboardExperiment | Deletes a TensorboardExperiment. |
| DeleteTensorboardRun | Deletes a TensorboardRun. |
| DeleteTensorboardTimeSeries | Deletes a TensorboardTimeSeries. |
| ExportTensorboardTimeSeriesData | Exports a TensorboardTimeSeries' data. Data is returned in paginated responses. |
| GetTensorboard | Gets a Tensorboard. |
| GetTensorboardExperiment | Gets a TensorboardExperiment. |
| GetTensorboardRun | Gets a TensorboardRun. |
| GetTensorboardTimeSeries | Gets a TensorboardTimeSeries. |
| ListTensorboardExperiments | Lists TensorboardExperiments in a Location. |
| ListTensorboardRuns | Lists TensorboardRuns in a Location. |
| ListTensorboardTimeSeries | Lists TensorboardTimeSeries in a Location. |
| ListTensorboards | Lists Tensorboards in a Location. |
| ReadTensorboardBlobData | Gets bytes of TensorboardBlobs. This allows reading blob data stored in the consumer project's Cloud Storage bucket without users having to obtain Cloud Storage access permission. |
| ReadTensorboardSize | Returns the storage size for a given TensorBoard instance. |
| ReadTensorboardTimeSeriesData | Reads a TensorboardTimeSeries' data. By default, if the number of data points stored is less than 1000, all data is returned; otherwise, 1000 data points are randomly sampled from the time series and returned. This value can be changed via max_data_points, which can't be greater than 10k. |
| ReadTensorboardUsage | Returns a list of monthly active users for a given TensorBoard instance. |
| UpdateTensorboard | Updates a Tensorboard. |
| UpdateTensorboardExperiment | Updates a TensorboardExperiment. |
| UpdateTensorboardRun | Updates a TensorboardRun. |
| UpdateTensorboardTimeSeries | Updates a TensorboardTimeSeries. |
| WriteTensorboardExperimentData | Writes time series data points of multiple TensorboardTimeSeries in multiple TensorboardRuns. If any data fails to be ingested, an error is returned. |
| WriteTensorboardRunData | Writes time series data points into multiple TensorboardTimeSeries under a TensorboardRun. If any data fails to be ingested, an error is returned. |
VizierService
Vertex AI Vizier API.
Vertex AI Vizier is a service to solve blackbox optimization problems, such as tuning machine learning hyperparameters and searching over deep learning architectures.
| Method | Description |
|---|---|
| AddTrialMeasurement | Adds a measurement of the objective metrics to a Trial. This measurement is assumed to have been taken before the Trial is complete. |
| CheckTrialEarlyStoppingState | Checks whether a Trial should stop or not. Returns a long-running operation. When the operation is successful, it will contain a |
| CompleteTrial | Marks a Trial as complete. |
| CreateStudy | Creates a Study. A resource name will be generated after creation of the Study. |
| CreateTrial | Adds a user-provided Trial to a Study. |
| DeleteStudy | Deletes a Study. |
| DeleteTrial | Deletes a Trial. |
| GetStudy | Gets a Study by name. |
| GetTrial | Gets a Trial. |
| ListOptimalTrials | Lists the Pareto-optimal Trials for a multi-objective Study, or the optimal Trials for a single-objective Study. The definition of Pareto-optimal can be found at https://en.wikipedia.org/wiki/Pareto_efficiency. |
| ListStudies | Lists all the studies in a region for an associated project. |
| ListTrials | Lists the Trials associated with a Study. |
| LookupStudy | Looks up a Study by its user-defined display_name field instead of the fully qualified resource name. |
| StopTrial | Stops a Trial. |
| SuggestTrials | Adds one or more Trials to a Study, with parameter values suggested by Vertex AI Vizier. Returns a long-running operation associated with the generation of Trial suggestions. When this long-running operation succeeds, it will contain a |
AcceleratorType
Represents a hardware accelerator type.
| Enum | Description |
|---|---|
| ACCELERATOR_TYPE_UNSPECIFIED | Unspecified accelerator type, which means no accelerator. |
| NVIDIA_TESLA_K80 | Deprecated: the Nvidia Tesla K80 GPU has reached end of support; see https://cloud.google.com/compute/docs/eol/k80-eol. |
| NVIDIA_TESLA_P100 | Nvidia Tesla P100 GPU. |
| NVIDIA_TESLA_V100 | Nvidia Tesla V100 GPU. |
| NVIDIA_TESLA_P4 | Nvidia Tesla P4 GPU. |
| NVIDIA_TESLA_T4 | Nvidia Tesla T4 GPU. |
| NVIDIA_TESLA_A100 | Nvidia Tesla A100 GPU. |
| NVIDIA_A100_80GB | Nvidia A100 80GB GPU. |
| NVIDIA_L4 | Nvidia L4 GPU. |
| NVIDIA_H100_80GB | Nvidia H100 80GB GPU. |
| TPU_V2 | TPU v2. |
| TPU_V3 | TPU v3. |
| TPU_V4_POD | TPU v4. |
| TPU_V5_LITEPOD | TPU v5. |
AddContextArtifactsAndExecutionsRequest
Request message for MetadataService.AddContextArtifactsAndExecutions
.
context
string
Required. The resource name of the Context that the Artifacts and Executions belong to. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
artifacts[]
string
The resource names of the Artifacts to attribute to the Context.
Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}
executions[]
string
The resource names of the Executions to associate with the Context.
Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
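As an illustration of the documented resource-name formats, a request body might be assembled like this (the project, location, and resource IDs below are hypothetical, used only to show the format):

```python
# Hypothetical IDs, used only to illustrate the documented name formats.
project, location, store = "my-project", "us-central1", "default"
base = f"projects/{project}/locations/{location}/metadataStores/{store}"

request = {
    # Required: the Context the Artifacts and Executions belong to.
    "context": f"{base}/contexts/pipeline-run-ctx",
    # Artifacts to attribute to the Context.
    "artifacts": [f"{base}/artifacts/trained-model"],
    # Executions to associate with the Context.
    "executions": [f"{base}/executions/training-step"],
}
```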
AddContextArtifactsAndExecutionsResponse
This type has no fields.
Response message for MetadataService.AddContextArtifactsAndExecutions
.
AddContextChildrenRequest
Request message for MetadataService.AddContextChildren
.
context
string
Required. The resource name of the parent Context.
Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
child_contexts[]
string
The resource names of the child Contexts.
AddContextChildrenResponse
This type has no fields.
Response message for MetadataService.AddContextChildren
.
AddExecutionEventsRequest
Request message for MetadataService.AddExecutionEvents
.
execution
string
Required. The resource name of the Execution that the Events connect Artifacts with. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
The Events to create and add.
AddExecutionEventsResponse
This type has no fields.
Response message for MetadataService.AddExecutionEvents
.
AddTrialMeasurementRequest
Request message for VizierService.AddTrialMeasurement
.
trial_name
string
Required. The name of the trial to add measurement. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}
Required. The measurement to be added to a Trial.
Annotation
Used to assign specific AnnotationSpec to a particular area of a DataItem or the whole part of the DataItem.
name
string
Output only. Resource name of the Annotation.
payload_schema_uri
string
Required. Google Cloud Storage URI that points to a YAML file describing the payload. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the parent Dataset's metadata.
Required. The schema of the payload can be found in payload_schema.
Output only. Timestamp when this Annotation was created.
Output only. Timestamp when this Annotation was last updated.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
Output only. The source of the Annotation.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your Annotations.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Annotation(System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. The following system labels exist for each Annotation:
- "aiplatform.googleapis.com/annotation_set_name": optional, name of the UI's annotation set this Annotation belongs to. If not set, the Annotation is not visible in the UI.
- "aiplatform.googleapis.com/payload_schema": output only; its value is the payload_schema's title.
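A minimal client-side sketch of the stated label constraints (illustrative only; the regex below is simplified and does not cover the international characters the service also allows):

```python
import re

# Simplified character set: lowercase letters, digits, underscore, dash.
LABEL_RE = re.compile(r"^[a-z0-9_-]*$")

def validate_labels(labels: dict) -> list:
    """Pre-check user labels against the documented Annotation rules.

    Not the server's actual validation: international characters are
    allowed by the service but rejected by this simplified regex.
    """
    errors = []
    # System labels (aiplatform.googleapis.com/ prefix) are excluded
    # from the 64-user-label limit.
    user = {k: v for k, v in labels.items()
            if not k.startswith("aiplatform.googleapis.com/")}
    if len(user) > 64:
        errors.append("more than 64 user labels")
    for k, v in user.items():
        if len(k) > 64 or len(v) > 64:
            errors.append(f"label {k!r}: key or value longer than 64 characters")
        if not LABEL_RE.match(k):
            errors.append(f"label {k!r}: invalid characters in key")
    return errors
```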
AnnotationSpec
Identifies a concept with which DataItems may be annotated.
name
string
Output only. Resource name of the AnnotationSpec.
display_name
string
Required. The user-defined name of the AnnotationSpec. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Output only. Timestamp when this AnnotationSpec was created.
Output only. Timestamp when AnnotationSpec was last updated.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
Artifact
Instance of a general artifact.
name
string
Output only. The resource name of the Artifact.
display_name
string
User provided display name of the Artifact. May be up to 128 Unicode characters.
uri
string
The uniform resource identifier of the artifact file. May be empty if there is no actual artifact file.
etag
string
An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Artifacts.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Artifact (System labels are excluded).
Output only. Timestamp when this Artifact was created.
Output only. Timestamp when this Artifact was last updated.
The state of this Artifact. This is a property of the Artifact, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines), and the system does not prescribe or check the validity of state transitions.
schema_title
string
The title of the schema describing the metadata.
The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
schema_version
string
The version of the schema in schema_name to use.
The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
Properties of the Artifact. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
description
string
Description of the Artifact
State
Describes the state of the Artifact.
| Enum | Description |
|---|---|
| STATE_UNSPECIFIED | Unspecified state for the Artifact. |
| PENDING | A state used by systems like Vertex AI Pipelines to indicate that the underlying data item represented by this Artifact is being created. |
| LIVE | A state indicating that the Artifact should exist, unless something external to the system deletes it. |
AssignNotebookRuntimeOperationMetadata
Metadata information for NotebookService.AssignNotebookRuntime
.
The generic operation information.
progress_message
string
A human-readable message that shows the intermediate progress details of NotebookRuntime.
AssignNotebookRuntimeRequest
Request message for NotebookService.AssignNotebookRuntime
.
parent
string
Required. The resource name of the Location to get the NotebookRuntime assignment. Format: projects/{project}/locations/{location}
notebook_runtime_template
string
Required. The resource name of the NotebookRuntimeTemplate based on which a NotebookRuntime will be assigned (reuse or create a new one).
Required. Provide runtime specific information (e.g. runtime owner, notebook id) used for NotebookRuntime assignment.
notebook_runtime_id
string
Optional. User specified ID for the notebook runtime.
Attribution
Attribution that explains a particular prediction output.
baseline_output_value
double
Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs
. The field name of the output is determined by the key in ExplanationMetadata.outputs
.
If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index
.
If there are multiple baselines, their output values are averaged.
instance_output_value
double
Output only. Model predicted output on the corresponding [explanation instance][ExplainRequest.instances]. The field name of the output is determined by the key in ExplanationMetadata.outputs
.
If the Model predicted output has multiple dimensions, this is the value in the output located by output_index
.
Output only. Attributions of each explained feature. Features are extracted from the prediction instances
according to explanation metadata for inputs
.
The value is a struct, whose keys are the name of the feature. The values are how much the feature in the instance
contributed to the predicted result.
The format of the value is determined by the feature's input format:
- If the feature is a scalar value, the attribution value is a floating number.
- If the feature is an array of scalar values, the attribution value is an array.
- If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct.
The ExplanationMetadata.feature_attributions_schema_uri
field, pointed to by the ExplanationSpec
field of the Endpoint.deployed_models
object, points to the schema file that describes the features and their attribution values (if it is populated).
output_index[]
int32
Output only. The index that locates the explained prediction output.
If the prediction output is a scalar value, output_index is not populated. If the prediction output has multiple dimensions, the length of the output_index list is the same as the number of dimensions of the output. The i-th element in output_index is the element index of the i-th dimension of the output vector. Indices start from 0.
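The indexing rule above can be illustrated with a small helper (illustrative only, not part of the API):

```python
def locate_output(output, output_index):
    """Walk a (possibly nested) prediction output using output_index.

    For a scalar output, output_index is empty and the scalar itself is
    returned; otherwise each element of output_index selects the index
    along one dimension of the output, in order.
    """
    value = output
    for i in output_index:
        value = value[i]
    return value
```

For example, with a rank-2 output `[[0.1, 0.9], [0.7, 0.3]]`, an `output_index` of `[0, 1]` locates the value `0.9`.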
output_display_name
string
Output only. The display name of the output identified by output_index
. For example, the predicted class name by a multi-classification Model.
This field is populated only if the Model predicts display names as a separate field along with the explained output. The predicted display name must have the same shape as the explained output, and can be located using output_index.
approximation_error
double
Output only. Error of feature_attributions
caused by approximation used in the explanation method. Lower value means more precise attributions.
- For Sampled Shapley attribution, increasing path_count might reduce the error.
- For Integrated Gradients attribution, increasing step_count might reduce the error.
- For XRAI attribution, increasing step_count might reduce the error.
See this introduction for more information.
output_name
string
Output only. Name of the explain output. Specified as the key in ExplanationMetadata.outputs
.
AutomaticResources
A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration. Each Model supporting these resources documents its specific guidelines.
min_replica_count
int32
Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas, up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
max_replica_count
int32
Immutable. The maximum number of replicas this DeployedModel may be deployed on when traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica number.
AutoscalingMetricSpec
The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
metric_name
string
Required. The resource metric name. Supported metrics:
- For Online Prediction:
  - aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle
  - aiplatform.googleapis.com/prediction/online/cpu/utilization
target
int32
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
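To illustrate how such a target drives scaling, here is an example spec alongside the common utilization-based replica formula. The formula is the generic autoscaling heuristic, shown only for intuition; it is not documented here as Vertex AI's exact algorithm:

```python
import math

# Example spec mirroring the documented fields (60 is also the default target).
spec = {
    "metric_name": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
    "target": 60,
}

def desired_replicas(current_replicas: int, current_utilization: float,
                     target: int) -> int:
    """Common utilization-based autoscaling heuristic (illustrative only).

    Scales the replica count proportionally to how far the observed
    utilization deviates from the target percentage.
    """
    return max(1, math.ceil(current_replicas * current_utilization / target))
```

For example, 4 replicas running at 90% utilization against a 60% target would suggest scaling to 6 replicas.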
AvroSource
The storage details for Avro input content.
Required. Google Cloud Storage location.
BatchCancelPipelineJobsOperationMetadata
Runtime operation information for PipelineService.BatchCancelPipelineJobs
.
The common part of the operation metadata.
BatchCancelPipelineJobsRequest
Request message for PipelineService.BatchCancelPipelineJobs
.
parent
string
Required. The name of the PipelineJobs' parent resource. Format: projects/{project}/locations/{location}
names[]
string
Required. The names of the PipelineJobs to cancel. A maximum of 32 PipelineJobs can be cancelled in a batch. Format: projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}
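Since at most 32 PipelineJobs can be cancelled per batch, a caller holding more names would typically chunk them client-side before issuing requests (a sketch, not part of the API):

```python
def chunk(names, size=32):
    """Split a list of PipelineJob resource names into batches of at
    most `size` (the documented per-request maximum is 32)."""
    return [names[i:i + size] for i in range(0, len(names), size)]
```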
BatchCancelPipelineJobsResponse
Response message for PipelineService.BatchCancelPipelineJobs
.
PipelineJobs cancelled.
BatchCreateFeaturesOperationMetadata
Details of operations that perform batch create Features.
Operation metadata for Feature.
BatchCreateFeaturesRequest
Request message for FeaturestoreService.BatchCreateFeatures
. Request message for FeatureRegistryService.BatchCreateFeatures
.
parent
string
Required. The resource name of the EntityType/FeatureGroup to create the batch of Features under. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
projects/{project}/locations/{location}/featureGroups/{feature_group}
Required. The request message specifying the Features to create. All Features must be created under the same parent EntityType / FeatureGroup. The parent
field in each child request message can be omitted. If parent
is set in a child request, then the value must match the parent
value in this request message.
BatchCreateFeaturesResponse
Response message for FeaturestoreService.BatchCreateFeatures
.
The Features created.
BatchCreateTensorboardRunsRequest
Request message for TensorboardService.BatchCreateTensorboardRuns
.
parent
string
Required. The resource name of the TensorboardExperiment to create the TensorboardRuns in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
The parent field in the CreateTensorboardRunRequest messages must match this field.
Required. The request message specifying the TensorboardRuns to create. A maximum of 1000 TensorboardRuns can be created in a batch.
BatchCreateTensorboardRunsResponse
Response message for TensorboardService.BatchCreateTensorboardRuns
.
The created TensorboardRuns.
BatchCreateTensorboardTimeSeriesRequest
Request message for TensorboardService.BatchCreateTensorboardTimeSeries
.
parent
string
Required. The resource name of the TensorboardExperiment to create the TensorboardTimeSeries in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
The TensorboardRuns referenced by the parent fields in the CreateTensorboardTimeSeriesRequest messages must be sub resources of this TensorboardExperiment.
Required. The request message specifying the TensorboardTimeSeries to create. A maximum of 1000 TensorboardTimeSeries can be created in a batch.
BatchCreateTensorboardTimeSeriesResponse
Response message for TensorboardService.BatchCreateTensorboardTimeSeries
.
The created TensorboardTimeSeries.
BatchDedicatedResources
A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.
Required. Immutable. The specification of a single machine.
starting_replica_count
int32
Immutable. The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides the starting number, which is not greater than max_replica_count.
max_replica_count
int32
Immutable. The maximum number of machine replicas the batch operation may be scaled to. The default value is 10.
BatchDeletePipelineJobsRequest
Request message for PipelineService.BatchDeletePipelineJobs
.
parent
string
Required. The name of the PipelineJobs' parent resource. Format: projects/{project}/locations/{location}
names[]
string
Required. The names of the PipelineJobs to delete. A maximum of 32 PipelineJobs can be deleted in a batch. Format: projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}
BatchDeletePipelineJobsResponse
Response message for PipelineService.BatchDeletePipelineJobs
.
PipelineJobs deleted.
BatchImportEvaluatedAnnotationsRequest
Request message for ModelService.BatchImportEvaluatedAnnotations
parent
string
Required. The name of the parent ModelEvaluationSlice resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}/slices/{slice}
Required. Evaluated annotations resource to be imported.
BatchImportEvaluatedAnnotationsResponse
Response message for ModelService.BatchImportEvaluatedAnnotations
imported_evaluated_annotations_count
int32
Output only. Number of EvaluatedAnnotations imported.
BatchImportModelEvaluationSlicesRequest
Request message for ModelService.BatchImportModelEvaluationSlices
parent
string
Required. The name of the parent ModelEvaluation resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}
Required. Model evaluation slice resource to be imported.
BatchImportModelEvaluationSlicesResponse
Response message for ModelService.BatchImportModelEvaluationSlices
imported_model_evaluation_slices[]
string
Output only. List of imported ModelEvaluationSlice.name
.
BatchMigrateResourcesOperationMetadata
Runtime operation information for MigrationService.BatchMigrateResources
.
The common part of the operation metadata.
Partial results that reflect the latest migration operation progress.
PartialResult
Represents a partial result in batch migration operation for one MigrateResourceRequest
.
It's the same as the value in [MigrateResourceRequest.migrate_resource_requests][].
result. If the resource's migration is ongoing, none of the result will be set. If the resource's migration is finished, either error or one of the migrated resource names will be filled. result can be only one of the following:
The error result of the migration request in case of failure.
model
string
Migrated model resource name.
dataset
string
Migrated dataset resource name.
BatchMigrateResourcesRequest
Request message for MigrationService.BatchMigrateResources
.
parent
string
Required. The location the migrated resources will live in. Format: projects/{project}/locations/{location}
Required. The request messages specifying the resources to migrate. They must be in the same location as the destination. Up to 50 resources can be migrated in one batch.
BatchMigrateResourcesResponse
Response message for MigrationService.BatchMigrateResources
.
Successfully migrated resources.
BatchPredictionJob
A job that uses a Model to produce predictions on multiple input instances. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.
name
string
Output only. Resource name of the BatchPredictionJob.
display_name
string
Required. The user-defined name of this BatchPredictionJob.
model
string
The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set.
The model resource name may contain a version ID or version alias to specify the version, for example projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
The model resource could also be a publisher model, for example publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
model_version_id
string
Output only. The version ID of the Model that produces the predictions via this job.
Contains model information necessary to perform batch prediction without requiring uploading to model registry. Exactly one of model and unmanaged_container_model must be set.
Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's
PredictSchemata's
instance_schema_uri
.
Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.
The parameters that govern the predictions. The schema of the parameters may be specified via the Model's
PredictSchemata's
parameters_schema_uri
.
Required. The Configuration specifying where output predictions should be written. The schema of any single prediction may be specified as a concatenation of Model's
PredictSchemata's
instance_schema_uri
and prediction_schema_uri
.
The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.
service_account
string
The service account that the DeployedModel's container runs as. If not specified, a system generated one will be used, which has minimal permissions and the custom container, if used, may not have enough permission to access other Google Cloud resources.
Users deploying the Model must have the iam.serviceAccounts.actAs
permission on this service account.
Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources
are used (in other cases Vertex AI does the tuning itself).
generate_explanation
bool
Generate explanation with the batch prediction results.
When set to true
, the batch prediction output changes based on the predictions_format
field of the BatchPredictionJob.output_config
object:
- bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
- jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
- csv: Generating explanations for CSV format is not supported.
If this field is set to true, either the Model.explanation_spec
or explanation_spec
must be populated.
Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation
is set to true
.
This value overrides the value of Model.explanation_spec
. All fields of explanation_spec
are optional in the request. If a field of the explanation_spec
object is not populated, the corresponding field of the Model.explanation_spec
object is inherited.
Output only. Information further describing the output of this job.
Output only. The detailed state of the job.
Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.
Output only. Information about resources that had been consumed by this job. Provided in real time at best effort basis, as well as a final value once the job completes.
Note: This field currently may be not populated for batch predictions that use AutoML Models.
Output only. Statistics on completed and failed prediction instances.
Output only. Time when the BatchPredictionJob was created.
Output only. Time when the BatchPredictionJob for the first time entered the JOB_STATE_RUNNING
state.
Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED
, JOB_STATE_FAILED
, JOB_STATE_CANCELLED
.
Output only. Time when the BatchPredictionJob was most recently updated.
labels
map<string, string>
The labels with user-defined metadata to organize BatchPredictionJobs.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.
disable_container_logging
bool
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Cloud Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing.
Users can disable container logging by setting this flag to true.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
InputConfig
Configures the input to BatchPredictionJob
. See Model.supported_input_storage_formats
for Model's supported input formats, and how instances should be expressed via any of them.
instances_format
string
Required. The format in which instances are given, must be one of the Model's
supported_input_storage_formats
.
source. Required. The source of the input. source can be only one of the following:
- The Cloud Storage location for the input instances.
- The BigQuery location of the input table. The schema of the table should be in the format described by the given context OpenAPI Schema, if one is provided. The table may contain additional columns that are not described by the schema, and they will be ignored.
InstanceConfig
Configuration defining how to transform batch prediction input instances to the instances that the Model accepts.
instance_type
string
The format of the instance that the Model accepts. Vertex AI will convert compatible batch prediction input instance formats
to the specified format.
Supported values are:

- object: Each input is converted to JSON object format.
  - For bigquery, each row is converted to an object.
  - For jsonl, each line of the JSONL input must be an object.
  - Does not apply to csv, file-list, tf-record, or tf-record-gzip.
- array: Each input is converted to JSON array format.
  - For bigquery, each row is converted to an array. The order of columns is determined by the BigQuery column order, unless included_fields is populated. included_fields must be populated for specifying field orders.
  - For jsonl, if each line of the JSONL input is an object, included_fields must be populated for specifying field orders.
  - Does not apply to csv, file-list, tf-record, or tf-record-gzip.

If not specified, Vertex AI converts the batch prediction input as follows:

- For bigquery and csv, the behavior is the same as array. The order of columns is the same as defined in the file or table, unless included_fields is populated.
- For jsonl, the prediction instance format is determined by each line of the input.
- For tf-record/tf-record-gzip, each record will be converted to an object in the format of {"b64": <value>}, where <value> is the Base64-encoded string of the content of the record.
- For file-list, each file in the list will be converted to an object in the format of {"b64": <value>}, where <value> is the Base64-encoded string of the content of the file.
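The {"b64": <value>} wrapping described for tf-record and file-list inputs can be reproduced client-side as a sketch:

```python
import base64

def to_b64_instance(record: bytes) -> dict:
    """Wrap raw record bytes the way the docs describe for tf-record and
    file-list inputs: a JSON object whose "b64" key holds the
    Base64-encoded content."""
    return {"b64": base64.b64encode(record).decode("ascii")}
```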
key_field
string
The name of the field that is considered the key.
The values identified by the key field are not included in the transformed instances that are sent to the Model. This is similar to specifying this field name in excluded_fields. In addition, the batch prediction output will not include the instances. Instead the output will only include the value of the key field, in a field named key in the output:
- For jsonl output format, the output will have a key field instead of the instance field.
- For csv/bigquery output format, the output will have a key column instead of the instance feature columns.
The input must be JSONL with objects at each line, CSV, BigQuery or TfRecord.
included_fields[]
string
Fields that will be included in the prediction instance that is sent to the Model.
If instance_type
is array
, the order of field names in included_fields also determines the order of the values in the array.
When included_fields is populated, excluded_fields
must be empty.
The input must be JSONL with objects at each line, BigQuery or TfRecord.
excluded_fields[]
string
Fields that will be excluded from the prediction instance that is sent to the Model.
Excluded fields will be attached to the batch prediction output if key_field
is not specified.
When excluded_fields is populated, included_fields
must be empty.
The input must be JSONL with objects at each line, BigQuery or TfRecord.
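Putting these fields together, a minimal InstanceConfig could be sketched as a JSON request-body fragment (shown here as a Python dict). The field values are hypothetical, and the camelCase names assume the standard proto-to-JSON field-name mapping:

```python
# Hypothetical InstanceConfig for a BatchPredictionJob request body.
instance_config = {
    "instanceType": "array",
    "keyField": "row_id",                          # echoed back as "key" in the output
    "includedFields": ["feature_a", "feature_b"],  # also fixes the array element order
}

# includedFields and excludedFields are mutually exclusive per the docs above.
assert not ("includedFields" in instance_config
            and "excludedFields" in instance_config)
```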
OutputConfig
Configures the output of BatchPredictionJob
. See Model.supported_output_storage_formats
for supported output formats, and how predictions are expressed via any of them.
predictions_format
string
Required. The format in which Vertex AI gives the predictions, must be one of the Model's
supported_output_storage_formats
.
Union field destination. Required. The destination of the output. destination can be only one of the following:
The Cloud Storage location of the directory where the output is to be written to. In the given directory a new directory is created. Its name is prediction-<model-display-name>-<job-create-time>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Inside of it, files predictions_0001.<extension>, predictions_0002.<extension>, ..., predictions_N.<extension> are created, where <extension> depends on the chosen predictions_format and N depends on the total number of successfully predicted instances. If the Model has both instance and prediction schemata defined, then each such file contains predictions as per the predictions_format. If prediction for any instance failed (partially or completely), then additional errors_0001.<extension>, errors_0002.<extension>, ..., errors_N.<extension> files are created (N depends on the total number of failed predictions). These files contain the failed instances, as per their schema, followed by an additional error field whose value is a google.rpc.Status containing only the code and message fields.
The BigQuery project or dataset location where the output is to be written to. If project is provided, a new dataset is created with name prediction_<model-display-name>_<job-create-time>, in which two tables are created, predictions and errors. If the Model has both instance and prediction schemata defined, then the tables have columns as follows: The predictions table contains instances for which the prediction succeeded; it has columns as per a concatenation of the Model's instance and prediction schemata. The errors table contains rows for which the prediction has failed; it has instance columns, as per the instance schema, followed by a single "errors" column, which as values has google.rpc.Status represented as a STRUCT, and containing only code and message.
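A minimal OutputConfig choosing a Cloud Storage destination could be sketched as follows. The bucket path is made up, and the camelCase names assume the standard proto-to-JSON field-name mapping:

```python
# Hypothetical OutputConfig for a BatchPredictionJob request body.
output_config = {
    "predictionsFormat": "jsonl",
    "gcsDestination": {"outputUriPrefix": "gs://example-bucket/batch-output"},
}

# destination is a oneof: gcsDestination and bigqueryDestination are exclusive.
assert not ("gcsDestination" in output_config
            and "bigqueryDestination" in output_config)
```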
OutputInfo
Further describes this job's output. Supplements output_config
.
bigquery_output_table
string
Output only. The name of the BigQuery table created, in predictions_<timestamp>
format, into which the prediction output is written. Can be used by UI to generate the BigQuery output path, for example.
Union field output_location. The output location into which prediction output is written. output_location can be only one of the following:
gcs_output_directory
string
Output only. The full path of the Cloud Storage directory created, into which the prediction output is written.
bigquery_output_dataset
string
Output only. The path of the BigQuery dataset created, in bq://projectId.bqDatasetId
format, into which the prediction output is written.
BatchReadFeatureValuesOperationMetadata
Details of operations that batch reads Feature values.
Operation metadata for Featurestore batch read Features values.
BatchReadFeatureValuesRequest
Request message for FeaturestoreService.BatchReadFeatureValues
.
featurestore
string
Required. The resource name of the Featurestore from which to query Feature values. Format: projects/{project}/locations/{location}/featurestores/{featurestore}
Required. Specifies output location and format.
When not empty, the specified fields in the *_read_instances source will be joined as-is in the output, in addition to those fields from the Featurestore Entity.
For BigQuery source, the type of the pass-through values will be automatically inferred. For CSV source, the pass-through values will be passed as opaque bytes.
Required. Specifies EntityType grouping Features to read values of and settings.
Optional. Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.
Union field read_option. read_option can be only one of the following:
Each read instance consists of exactly one read timestamp and one or more entity IDs identifying entities of the corresponding EntityTypes whose Features are requested.
Each output instance contains Feature values of requested entities concatenated together as of the read time.
An example read instance may be foo_entity_id, bar_entity_id, 2020-01-01T10:00:00.123Z.
An example output instance may be foo_entity_id, bar_entity_id, 2020-01-01T10:00:00.123Z, foo_entity_feature1_value, bar_entity_feature2_value.
Timestamp in each read instance must be millisecond-aligned.
csv_read_instances
are read instances stored in a plain-text CSV file. The header should be: [ENTITY_TYPE_ID1], [ENTITY_TYPE_ID2], ..., timestamp
The columns can be in any order.
Values in the timestamp column must use the RFC 3339 format, e.g. 2012-07-30T10:43:17.123Z
.
Similar to csv_read_instances, but from BigQuery source.
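A csv_read_instances file following the layout above can be sketched as follows. The entity type IDs ("users", "movies") and entity IDs are hypothetical:

```python
# Hypothetical csv_read_instances content for BatchReadFeatureValues.
csv_read_instances = (
    "users,movies,timestamp\n"
    "alice,inception,2012-07-30T10:43:17.123Z\n"
    "bob,memento,2012-07-30T10:43:17.456Z\n"
)

header, *rows = csv_read_instances.strip().splitlines()
columns = header.split(",")

# A timestamp column is required; the columns may otherwise be in any order.
assert "timestamp" in columns
assert all(len(r.split(",")) == len(columns) for r in rows)
```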
EntityTypeSpec
Selects Features of an EntityType to read values of and specifies read settings.
entity_type_id
string
Required. ID of the EntityType to select Features. The EntityType id is the entity_type_id
specified during EntityType creation.
Required. Selectors choosing which Feature values to read from the EntityType.
Per-Feature settings for the batch read.
PassThroughField
Describe pass-through fields in read_instance source.
field_name
string
Required. The name of the field in the CSV header or the name of the column in BigQuery table. The naming restriction is the same as Feature.name
.
BatchReadFeatureValuesResponse
This type has no fields.
Response message for FeaturestoreService.BatchReadFeatureValues
.
BatchReadTensorboardTimeSeriesDataRequest
Request message for TensorboardService.BatchReadTensorboardTimeSeriesData
.
tensorboard
string
Required. The resource name of the Tensorboard containing TensorboardTimeSeries to read data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
. The TensorboardTimeSeries referenced by time_series
must be sub resources of this Tensorboard.
time_series[]
string
Required. The resource names of the TensorboardTimeSeries to read data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
BatchReadTensorboardTimeSeriesDataResponse
Response message for TensorboardService.BatchReadTensorboardTimeSeriesData
.
The returned time series data.
BigQueryDestination
The BigQuery location for the output content.
output_uri
string
Required. BigQuery URI to a project or table, up to 2000 characters long.
When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist.
Accepted forms:
- BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
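A rough local check of the three accepted forms could look like this. It is an illustrative sketch of the documented rules, not the service's actual validation logic:

```python
def is_valid_bq_uri(uri: str) -> bool:
    """Rough check of the documented bq:// forms; not the real validator."""
    if not uri.startswith("bq://") or len(uri) > 2000:
        return False
    parts = uri[len("bq://"):].split(".")
    # One part: project; two: project.dataset; three: project.dataset.table.
    return 1 <= len(parts) <= 3 and all(parts)

assert is_valid_bq_uri("bq://projectId")
assert is_valid_bq_uri("bq://projectId.bqDatasetId.bqTableId")
assert not is_valid_bq_uri("gs://some-bucket")
```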
BigQuerySource
The BigQuery location for the input content.
input_uri
string
Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms:
- BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
BleuInput
Input for bleu metric.
Required. Spec for bleu score metric.
Required. Repeated bleu instances.
BleuInstance
Spec for bleu instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
BleuMetricValue
Bleu metric value for an instance.
score
float
Output only. Bleu score.
BleuResults
Results for bleu metric.
Output only. Bleu metric values.
BleuSpec
Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 to 1.
use_effective_order
bool
Optional. Whether to use effective order to compute the BLEU score.
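Assembled as JSON, a BleuInput could be sketched like this. The prediction and reference texts are hypothetical, and the camelCase names assume the standard proto-to-JSON field-name mapping:

```python
# Hypothetical BleuInput: one metric spec plus repeated instances.
bleu_input = {
    "metricSpec": {"useEffectiveOrder": True},
    "instances": [
        {"prediction": "the cat sat on the mat",
         "reference": "a cat sat on the mat"},
    ],
}

# Every instance needs both a prediction and a reference.
assert all({"prediction", "reference"} <= inst.keys()
           for inst in bleu_input["instances"])
```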
Blob
Content blob.
It's preferred to send as text directly rather than raw bytes.
mime_type
string
Required. The IANA standard MIME type of the source data.
data
bytes
Required. Raw bytes.
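In a JSON request body, the raw bytes of a Blob are carried base64-encoded, which can be sketched as follows (the payload is a stand-in, not real image data):

```python
import base64

raw = b"\x89PNG...stand-in payload"  # not a real image
blob = {
    "mimeType": "image/png",
    "data": base64.b64encode(raw).decode("ascii"),
}

# The receiver recovers the original bytes by reversing the encoding.
assert base64.b64decode(blob["data"]) == raw
```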
BlurBaselineConfig
Config for blur baseline.
When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
max_blur_sigma
float
The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
BoolArray
A list of boolean values.
values[]
bool
A list of bool values.
CancelBatchPredictionJobRequest
Request message for JobService.CancelBatchPredictionJob
.
name
string
Required. The name of the BatchPredictionJob to cancel. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}
CancelCustomJobRequest
Request message for JobService.CancelCustomJob
.
name
string
Required. The name of the CustomJob to cancel. Format: projects/{project}/locations/{location}/customJobs/{custom_job}
CancelHyperparameterTuningJobRequest
Request message for JobService.CancelHyperparameterTuningJob
.
name
string
Required. The name of the HyperparameterTuningJob to cancel. Format: projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}
CancelNasJobRequest
Request message for JobService.CancelNasJob
.
name
string
Required. The name of the NasJob to cancel. Format: projects/{project}/locations/{location}/nasJobs/{nas_job}
CancelPipelineJobRequest
Request message for PipelineService.CancelPipelineJob
.
name
string
Required. The name of the PipelineJob to cancel. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}
CancelTrainingPipelineRequest
Request message for PipelineService.CancelTrainingPipeline
.
name
string
Required. The name of the TrainingPipeline to cancel. Format: projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}
CancelTuningJobRequest
Request message for GenAiTuningService.CancelTuningJob
.
name
string
Required. The name of the TuningJob to cancel. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}
Candidate
A response candidate generated from the model.
index
int32
Output only. Index of the candidate.
Output only. Content parts of the candidate.
avg_logprobs
double
Output only. Average log probability score of the candidate.
Output only. Log-likelihood scores for the response tokens and top tokens
Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.
Output only. List of ratings for the safety of a response candidate.
There is at most one rating per category.
Output only. Source attribution of the generated content.
Output only. Metadata specifies sources used to ground generated content.
finish_message
string
Output only. Describes the reason the model stopped generating tokens in more detail. This is only filled when finish_reason
is set.
FinishReason
The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.
| Enum | Description |
|---|---|
| FINISH_REASON_UNSPECIFIED | The finish reason is unspecified. |
| STOP | Token generation reached a natural stopping point or a configured stop sequence. |
| MAX_TOKENS | Token generation reached the configured maximum output tokens. |
| SAFETY | Token generation stopped because the content potentially contains safety violations. NOTE: When streaming, content is empty if content filters block the output. |
| RECITATION | Token generation stopped because the content potentially contains copyright violations. |
| OTHER | All other reasons that stopped the token generation. |
| BLOCKLIST | Token generation stopped because the content contains forbidden terms. |
| PROHIBITED_CONTENT | Token generation stopped for potentially containing prohibited content. |
| SPII | Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII). |
| MALFORMED_FUNCTION_CALL | The function call generated by the model is invalid. |
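One way a client might bucket these values when handling responses (an illustrative sketch; the grouping is a judgment call, not part of the API):

```python
# Hypothetical client-side grouping of FinishReason values.
BLOCKED = {"SAFETY", "RECITATION", "BLOCKLIST", "PROHIBITED_CONTENT", "SPII"}

def classify_finish_reason(reason: str) -> str:
    if reason == "STOP":
        return "complete"       # natural stop or configured stop sequence
    if reason == "MAX_TOKENS":
        return "truncated"      # output limit reached; consider retrying
    if reason in BLOCKED:
        return "blocked"        # content filtered by the service
    return "other"              # OTHER, MALFORMED_FUNCTION_CALL, unspecified

assert classify_finish_reason("STOP") == "complete"
assert classify_finish_reason("SAFETY") == "blocked"
```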
CheckTrialEarlyStoppingStateMetatdata
This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.
Operation metadata for suggesting Trials.
study
string
The name of the Study that the Trial belongs to.
trial
string
The Trial name.
CheckTrialEarlyStoppingStateRequest
Request message for VizierService.CheckTrialEarlyStoppingState
.
trial_name
string
Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}
CheckTrialEarlyStoppingStateResponse
Response message for VizierService.CheckTrialEarlyStoppingState
.
should_stop
bool
True if the Trial should stop.
Citation
Source attributions for content.
start_index
int32
Output only. Start index into the content.
end_index
int32
Output only. End index into the content.
uri
string
Output only. Url reference of the attribution.
title
string
Output only. Title of the attribution.
license
string
Output only. License of the attribution.
Output only. Publication date of the attribution.
CitationMetadata
A collection of source attributions for a piece of content.
Output only. List of citations.
CoherenceInput
Input for coherence metric.
Required. Spec for coherence score metric.
Required. Coherence instance.
CoherenceInstance
Spec for coherence instance.
prediction
string
Required. Output of the evaluated model.
CoherenceResult
Spec for coherence result.
explanation
string
Output only. Explanation for coherence score.
score
float
Output only. Coherence score.
confidence
float
Output only. Confidence for coherence score.
CoherenceSpec
Spec for coherence score metric.
version
int32
Optional. Which version to use for evaluation.
CompleteTrialRequest
Request message for VizierService.CompleteTrial
.
name
string
Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}
Optional. If provided, it will be used as the completed Trial's final_measurement; otherwise, the service will auto-select a previously reported measurement as the final_measurement.
trial_infeasible
bool
Optional. True if the Trial cannot be run with the given Parameter, and final_measurement will be ignored.
infeasible_reason
string
Optional. A human readable reason why the trial was infeasible. This should only be provided if trial_infeasible
is true.
CompletionStats
Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.
successful_count
int64
Output only. The number of entities that had been processed successfully.
failed_count
int64
Output only. The number of entities for which any error was encountered.
incomplete_count
int64
Output only. In cases when enough errors are encountered, a job, pipeline, or operation may fail as a whole. This is the number of entities for which processing had not been finished (in either a successful or failed state). Set to -1 if the number is unknown (for example, the operation failed before the total entity number could be collected).
successful_forecast_point_count
int64
Output only. The number of the successful forecast points that are generated by the forecasting model. This is ONLY used by the forecasting batch prediction.
ComputeTokensRequest
Request message for ComputeTokens RPC call.
endpoint
string
Required. The name of the Endpoint requested to get lists of tokens and token ids.
Optional. The instances that are the input to token computing API call. Schema is identical to the prediction schema of the text model, even for the non-text models, like chat models, or Codey models.
model
string
Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
Optional. Input content.
ComputeTokensResponse
Response message for ComputeTokens RPC call.
Lists of token info from the input. A ComputeTokensRequest can contain multiple instances, each with a prompt, so a list of token info is returned for each instance.
ContainerRegistryDestination
The Container Registry location for the container image.
output_uri
string
Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms:
- Google Container Registry path. For example: gcr.io/projectId/imageName:tag.
- Artifact Registry path. For example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag.
If a tag is not specified, "latest" will be used as the default tag.
ContainerSpec
The spec of a Container.
image_uri
string
Required. The URI of a container image in the Container Registry that is to be run on each worker replica.
command[]
string
The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
args[]
string
The arguments to be passed when starting the container.
Environment variables to be passed to the container. Maximum limit is 100.
Content
The base structured datatype containing multi-part content of a message.
A Content
includes a role
field designating the producer of the Content
and a parts
field containing multi-part data that contains the content of the message turn.
role
string
Optional. The producer of the content. Must be either 'user' or 'model'.
Useful to set for multi-turn conversations, otherwise can be left blank or unset.
Required. Ordered Parts
that constitute a single message. Parts may have different IANA MIME types.
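A multi-turn conversation expressed as a list of Content messages could be sketched as follows. The text is hypothetical, and the JSON shape assumes the standard proto-to-JSON mapping:

```python
# Hypothetical multi-turn conversation as a list of Content messages.
contents = [
    {"role": "user",  "parts": [{"text": "What is the capital of France?"}]},
    {"role": "model", "parts": [{"text": "Paris."}]},
    {"role": "user",  "parts": [{"text": "And of Italy?"}]},
]

# role must be 'user' or 'model', and parts is an ordered, non-empty list.
assert all(c["role"] in ("user", "model") and c["parts"] for c in contents)
```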
Context
Instance of a general context.
name
string
Immutable. The resource name of the Context.
display_name
string
User provided display name of the Context. May be up to 128 Unicode characters.
etag
string
An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Contexts.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
Output only. Timestamp when this Context was created.
Output only. Timestamp when this Context was last updated.
parent_contexts[]
string
Output only. A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
schema_title
string
The title of the schema describing the metadata.
Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
schema_version
string
The version of the schema in schema_name to use.
Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
Properties of the Context. Top level metadata keys' heading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
description
string
Description of the Context
CopyModelOperationMetadata
Details of ModelService.CopyModel
operation.
The common part of the operation metadata.
CopyModelRequest
Request message for ModelService.CopyModel
.
parent
string
Required. The resource name of the Location into which to copy the Model. Format: projects/{project}/locations/{location}
source_model
string
Required. The resource name of the Model to copy. That Model must be in the same Project. Format: projects/{project}/locations/{location}/models/{model}
Customer-managed encryption key options. If this is set, then the Model copy will be encrypted with the provided encryption key.
Union field destination_model. If both fields are unset, a new Model will be created with a generated ID. destination_model can be only one of the following:
model_id
string
Optional. Copy source_model into a new Model with this ID. The ID will become the final component of the model resource name.
This value may be up to 63 characters, and valid characters are [a-z0-9_-]
. The first character cannot be a number or hyphen.
parent_model
string
Optional. Specify this field to copy source_model into this existing Model as a new version. Format: projects/{project}/locations/{location}/models/{model}
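A CopyModelRequest that copies into a brand-new Model ID could be sketched as follows. The project, location, and IDs are hypothetical, and the camelCase names assume the standard proto-to-JSON mapping:

```python
# Hypothetical CopyModelRequest body; modelId and parentModel are a oneof.
copy_request = {
    "parent": "projects/example-project/locations/us-central1",
    "sourceModel": "projects/example-project/locations/us-central1/models/123",
    "modelId": "copied-model",
}

# destination_model is a oneof: at most one of modelId / parentModel is set.
assert sum(k in copy_request for k in ("modelId", "parentModel")) <= 1
```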
CopyModelResponse
Response message of ModelService.CopyModel
operation.
model
string
The name of the copied Model resource. Format: projects/{project}/locations/{location}/models/{model}
model_version_id
string
Output only. The version ID of the model that is copied.
CountTokensRequest
Request message for [PredictionService.CountTokens][].
endpoint
string
Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
model
string
Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
Optional. The instances that are the input to token counting call. Schema is identical to the prediction schema of the underlying model.
Optional. Input content.
Optional. A list of Tools
the model may use to generate the next response.
A Tool
is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.
Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
Optional. Generation config that the model will use to generate the response.
CountTokensResponse
Response message for [PredictionService.CountTokens][].
total_tokens
int32
The total number of tokens counted across all instances from the request.
total_billable_characters
int32
The total number of billable characters counted across all instances from the request.
CreateArtifactRequest
Request message for MetadataService.CreateArtifact
.
parent
string
Required. The resource name of the MetadataStore where the Artifact should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
Required. The Artifact to create.
artifact_id
string
The {artifact} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}
If not provided, the Artifact's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/
. Must be unique across all Artifacts in the parent MetadataStore. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting Artifact.)
CreateBatchPredictionJobRequest
Request message for JobService.CreateBatchPredictionJob
.
parent
string
Required. The resource name of the Location to create the BatchPredictionJob in. Format: projects/{project}/locations/{location}
Required. The BatchPredictionJob to create.
CreateContextRequest
Request message for MetadataService.CreateContext
.
parent
string
Required. The resource name of the MetadataStore where the Context should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
Required. The Context to create.
context_id
string
The {context} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
. If not provided, the Context's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/
. Must be unique across all Contexts in the parent MetadataStore. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting Context.)
CreateCustomJobRequest
Request message for JobService.CreateCustomJob
.
parent
string
Required. The resource name of the Location to create the CustomJob in. Format: projects/{project}/locations/{location}
Required. The CustomJob to create.
CreateDatasetOperationMetadata
Runtime operation information for DatasetService.CreateDataset
.
The operation generic information.
CreateDatasetRequest
Request message for DatasetService.CreateDataset
.
parent
string
Required. The resource name of the Location to create the Dataset in. Format: projects/{project}/locations/{location}
Required. The Dataset to create.
CreateDatasetVersionOperationMetadata
Runtime operation information for DatasetService.CreateDatasetVersion
.
The common part of the operation metadata.
CreateDatasetVersionRequest
Request message for DatasetService.CreateDatasetVersion
.
parent
string
Required. The name of the Dataset resource. Format: projects/{project}/locations/{location}/datasets/{dataset}
Required. The version to be created. The same CMEK policies as the original Dataset will be applied to the dataset version, so the EncryptionSpecType does not need to be specified here.
CreateDeploymentResourcePoolOperationMetadata
Runtime operation information for CreateDeploymentResourcePool method.
The operation generic information.
CreateDeploymentResourcePoolRequest
Request message for CreateDeploymentResourcePool method.
parent
string
Required. The parent location resource where this DeploymentResourcePool will be created. Format: projects/{project}/locations/{location}
Required. The DeploymentResourcePool to create.
deployment_resource_pool_id
string
Required. The ID to use for the DeploymentResourcePool, which will become the final component of the DeploymentResourcePool's resource name.
The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/
.
CreateEndpointOperationMetadata
Runtime operation information for EndpointService.CreateEndpoint
.
The operation generic information.
CreateEndpointRequest
Request message for EndpointService.CreateEndpoint
.
parent
string
Required. The resource name of the Location to create the Endpoint in. Format: projects/{project}/locations/{location}
Required. The Endpoint to create.
endpoint_id
string
Immutable. The ID to use for endpoint, which will become the final component of the endpoint resource name. If not provided, Vertex AI will generate a value for this ID.
If the first character is a letter, this value may be up to 63 characters, and valid characters are [a-z0-9-]
. The last character must be a letter or number.
If the first character is a number, this value may be up to 9 characters, and valid characters are [0-9]
with no leading zeros.
When using HTTP/JSON, this field is populated based on a query string argument, such as ?endpoint_id=12345
. This is the fallback for fields that are not included in either the URI or the body.
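The two ID shapes described above can be restated as a local check. This is a sketch of the documented rules, not the service's actual validator:

```python
import re

def is_valid_endpoint_id(endpoint_id: str) -> bool:
    """Rough restatement of the documented endpoint_id rules."""
    # Letter-first: up to 63 chars of [a-z0-9-], last char a letter or digit.
    if re.fullmatch(r"[a-z]([a-z0-9-]{0,61}[a-z0-9])?", endpoint_id):
        return True
    # Number-first: up to 9 digits, no leading zeros.
    return re.fullmatch(r"[1-9][0-9]{0,8}", endpoint_id) is not None

assert is_valid_endpoint_id("my-endpoint-1")
assert is_valid_endpoint_id("123456789")
assert not is_valid_endpoint_id("0123")         # leading zero
assert not is_valid_endpoint_id("-starts-bad")  # must start with letter or digit
```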
CreateEntityTypeOperationMetadata
Details of operations that perform create EntityType.
Operation metadata for EntityType.
CreateEntityTypeRequest
Request message for FeaturestoreService.CreateEntityType
.
parent
string
Required. The resource name of the Featurestore to create EntityTypes. Format: projects/{project}/locations/{location}/featurestores/{featurestore}
The EntityType to create.
entity_type_id
string
Required. The ID to use for the EntityType, which will become the final component of the EntityType's resource name.
This value may be up to 60 characters, and valid characters are [a-z0-9_]
. The first character cannot be a number.
The value must be unique within a featurestore.
CreateExecutionRequest
Request message for MetadataService.CreateExecution
.
parent
string
Required. The resource name of the MetadataStore where the Execution should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
Required. The Execution to create.
execution_id
string
The {execution} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
If not provided, the Execution's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/
. Must be unique across all Executions in the parent MetadataStore. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting Execution.)
CreateFeatureGroupOperationMetadata
Details of operations that perform create FeatureGroup.
Operation metadata for FeatureGroup.
CreateFeatureGroupRequest
Request message for FeatureRegistryService.CreateFeatureGroup
.
parent
string
Required. The resource name of the Location to create FeatureGroups. Format: projects/{project}/locations/{location}
Required. The FeatureGroup to create.
feature_group_id
string
Required. The ID to use for this FeatureGroup, which will become the final component of the FeatureGroup's resource name.
This value may be up to 128 characters, and valid characters are [a-z0-9_]
. The first character cannot be a number.
The value must be unique within the project and location.
CreateFeatureOnlineStoreOperationMetadata
Details of operations that perform create FeatureOnlineStore.
Operation metadata for FeatureOnlineStore.
CreateFeatureOnlineStoreRequest
Request message for FeatureOnlineStoreAdminService.CreateFeatureOnlineStore
.
parent
string
Required. The resource name of the Location to create FeatureOnlineStores. Format: projects/{project}/locations/{location}
Required. The FeatureOnlineStore to create.
feature_online_store_id
string
Required. The ID to use for this FeatureOnlineStore, which will become the final component of the FeatureOnlineStore's resource name.
This value may be up to 60 characters, and valid characters are [a-z0-9_]
. The first character cannot be a number.
The value must be unique within the project and location.
CreateFeatureOperationMetadata
Details of operations that perform create Feature.
Operation metadata for Feature.
CreateFeatureRequest
Request message for FeaturestoreService.CreateFeature
. Request message for FeatureRegistryService.CreateFeature
.
parent
string
Required. The resource name of the EntityType or FeatureGroup to create a Feature. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}
Required. The Feature to create.
feature_id
string
Required. The ID to use for the Feature, which will become the final component of the Feature's resource name.
This value may be up to 128 characters, and valid characters are [a-z0-9_]
. The first character cannot be a number.
The value must be unique within an EntityType/FeatureGroup.
CreateFeatureViewOperationMetadata
Details of operations that create a FeatureView.
Operation metadata for FeatureView Create.
CreateFeatureViewRequest
Request message for FeatureOnlineStoreAdminService.CreateFeatureView
.
parent
string
Required. The resource name of the FeatureOnlineStore to create FeatureViews. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}
Required. The FeatureView to create.
feature_view_id
string
Required. The ID to use for the FeatureView, which will become the final component of the FeatureView's resource name.
This value may be up to 60 characters, and valid characters are [a-z0-9_]
. The first character cannot be a number.
The value must be unique within a FeatureOnlineStore.
run_sync_immediately
bool
Immutable. If set to true, one on-demand sync will run immediately, regardless of whether the FeatureView.sync_config
is configured.
CreateFeaturestoreOperationMetadata
Details of operations that create a Featurestore.
Operation metadata for Featurestore.
CreateFeaturestoreRequest
Request message for FeaturestoreService.CreateFeaturestore
.
parent
string
Required. The resource name of the Location to create Featurestores. Format: projects/{project}/locations/{location}
Required. The Featurestore to create.
featurestore_id
string
Required. The ID to use for this Featurestore, which will become the final component of the Featurestore's resource name.
This value may be up to 60 characters, and valid characters are [a-z0-9_]
. The first character cannot be a number.
The value must be unique within the project and location.
CreateHyperparameterTuningJobRequest
Request message for JobService.CreateHyperparameterTuningJob
.
parent
string
Required. The resource name of the Location to create the HyperparameterTuningJob in. Format: projects/{project}/locations/{location}
Required. The HyperparameterTuningJob to create.
CreateIndexEndpointOperationMetadata
Runtime operation information for IndexEndpointService.CreateIndexEndpoint
.
The operation generic information.
CreateIndexEndpointRequest
Request message for IndexEndpointService.CreateIndexEndpoint
.
parent
string
Required. The resource name of the Location to create the IndexEndpoint in. Format: projects/{project}/locations/{location}
Required. The IndexEndpoint to create.
CreateIndexOperationMetadata
Runtime operation information for IndexService.CreateIndex
.
The operation generic information.
The operation metadata with regard to Matching Engine Index operation.
CreateIndexRequest
Request message for IndexService.CreateIndex
.
parent
string
Required. The resource name of the Location to create the Index in. Format: projects/{project}/locations/{location}
Required. The Index to create.
CreateMetadataSchemaRequest
Request message for MetadataService.CreateMetadataSchema
.
parent
string
Required. The resource name of the MetadataStore where the MetadataSchema should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
Required. The MetadataSchema to create.
metadata_schema_id
string
The {metadata_schema} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/metadataSchemas/{metadataschema}
If not provided, the MetadataSchema's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/
. Must be unique across all MetadataSchemas in the parent Location. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting MetadataSchema.)
CreateMetadataStoreOperationMetadata
Details of operations that perform MetadataService.CreateMetadataStore
.
Operation metadata for creating a MetadataStore.
CreateMetadataStoreRequest
Request message for MetadataService.CreateMetadataStore
.
parent
string
Required. The resource name of the Location where the MetadataStore should be created. Format: projects/{project}/locations/{location}/
Required. The MetadataStore to create.
metadata_store_id
string
The {metadatastore} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
If not provided, the MetadataStore's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/
. Must be unique across all MetadataStores in the parent Location. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting MetadataStore.)
CreateModelDeploymentMonitoringJobRequest
Request message for JobService.CreateModelDeploymentMonitoringJob
.
parent
string
Required. The parent of the ModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}
Required. The ModelDeploymentMonitoringJob to create
CreateNasJobRequest
Request message for JobService.CreateNasJob
.
parent
string
Required. The resource name of the Location to create the NasJob in. Format: projects/{project}/locations/{location}
Required. The NasJob to create.
CreateNotebookExecutionJobOperationMetadata
Metadata information for NotebookService.CreateNotebookExecutionJob
.
The operation generic information.
progress_message
string
A human-readable message that shows the intermediate progress details of NotebookRuntime.
CreateNotebookExecutionJobRequest
Request message for [NotebookService.CreateNotebookExecutionJob]
parent
string
Required. The resource name of the Location to create the NotebookExecutionJob. Format: projects/{project}/locations/{location}
Required. The NotebookExecutionJob to create.
notebook_execution_job_id
string
Optional. User specified ID for the NotebookExecutionJob.
CreateNotebookRuntimeTemplateOperationMetadata
Metadata information for NotebookService.CreateNotebookRuntimeTemplate
.
The operation generic information.
CreateNotebookRuntimeTemplateRequest
Request message for NotebookService.CreateNotebookRuntimeTemplate
.
parent
string
Required. The resource name of the Location to create the NotebookRuntimeTemplate. Format: projects/{project}/locations/{location}
Required. The NotebookRuntimeTemplate to create.
notebook_runtime_template_id
string
Optional. User specified ID for the notebook runtime template.
CreatePersistentResourceOperationMetadata
Details of operations that create a PersistentResource.
Operation metadata for PersistentResource.
progress_message
string
Progress message for the create LRO.
CreatePersistentResourceRequest
Request message for PersistentResourceService.CreatePersistentResource
.
parent
string
Required. The resource name of the Location to create the PersistentResource in. Format: projects/{project}/locations/{location}
Required. The PersistentResource to create.
persistent_resource_id
string
Required. The ID to use for the PersistentResource, which will become the final component of the PersistentResource's resource name.
The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/
.
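The documented regex above can be applied directly as a client-side check; a minimal sketch:

```python
import re

# Regex as documented for persistent_resource_id: max 63 characters,
# must start with a letter and end with a letter or digit.
_PERSISTENT_RESOURCE_ID_RE = re.compile(r"^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_valid_persistent_resource_id(resource_id: str) -> bool:
    """Return True if the ID matches the documented pattern."""
    return bool(_PERSISTENT_RESOURCE_ID_RE.match(resource_id))
```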
CreatePipelineJobRequest
Request message for PipelineService.CreatePipelineJob
.
parent
string
Required. The resource name of the Location to create the PipelineJob in. Format: projects/{project}/locations/{location}
Required. The PipelineJob to create.
pipeline_job_id
string
The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be automatically generated.
This value should be less than 128 characters, and valid characters are /[a-z][0-9]-/
.
CreateRegistryFeatureOperationMetadata
Details of operations that create a Feature.
Operation metadata for Feature.
CreateScheduleRequest
Request message for ScheduleService.CreateSchedule
.
parent
string
Required. The resource name of the Location to create the Schedule in. Format: projects/{project}/locations/{location}
Required. The Schedule to create.
CreateSpecialistPoolOperationMetadata
Runtime operation information for SpecialistPoolService.CreateSpecialistPool
.
The operation generic information.
CreateSpecialistPoolRequest
Request message for SpecialistPoolService.CreateSpecialistPool
.
parent
string
Required. The parent Project name for the new SpecialistPool. The form is projects/{project}/locations/{location}
.
Required. The SpecialistPool to create.
CreateStudyRequest
Request message for VizierService.CreateStudy
.
parent
string
Required. The resource name of the Location to create the Study in. Format: projects/{project}/locations/{location}
Required. The Study configuration used to create the Study.
CreateTensorboardExperimentRequest
Request message for TensorboardService.CreateTensorboardExperiment
.
parent
string
Required. The resource name of the Tensorboard to create the TensorboardExperiment in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
The TensorboardExperiment to create.
tensorboard_experiment_id
string
Required. The ID to use for the Tensorboard experiment, which becomes the final component of the Tensorboard experiment's resource name.
This value should be 1-128 characters, and valid characters are /[a-z][0-9]-/
.
CreateTensorboardOperationMetadata
Details of operations that create a Tensorboard.
Operation metadata for Tensorboard.
CreateTensorboardRequest
Request message for TensorboardService.CreateTensorboard
.
parent
string
Required. The resource name of the Location to create the Tensorboard in. Format: projects/{project}/locations/{location}
Required. The Tensorboard to create.
CreateTensorboardRunRequest
Request message for TensorboardService.CreateTensorboardRun
.
parent
string
Required. The resource name of the TensorboardExperiment to create the TensorboardRun in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
Required. The TensorboardRun to create.
tensorboard_run_id
string
Required. The ID to use for the Tensorboard run, which becomes the final component of the Tensorboard run's resource name.
This value should be 1-128 characters, and valid characters are /[a-z][0-9]-/
.
CreateTensorboardTimeSeriesRequest
Request message for TensorboardService.CreateTensorboardTimeSeries
.
parent
string
Required. The resource name of the TensorboardRun to create the TensorboardTimeSeries in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
tensorboard_time_series_id
string
Optional. The user-specified unique ID to use for the TensorboardTimeSeries, which becomes the final component of the TensorboardTimeSeries's resource name. This value should match "[a-z0-9][a-z0-9-]{0,127}"
Required. The TensorboardTimeSeries to create.
CreateTrainingPipelineRequest
Request message for PipelineService.CreateTrainingPipeline
.
parent
string
Required. The resource name of the Location to create the TrainingPipeline in. Format: projects/{project}/locations/{location}
Required. The TrainingPipeline to create.
CreateTrialRequest
Request message for VizierService.CreateTrial
.
parent
string
Required. The resource name of the Study to create the Trial in. Format: projects/{project}/locations/{location}/studies/{study}
Required. The Trial to create.
CreateTuningJobRequest
Request message for GenAiTuningService.CreateTuningJob
.
parent
string
Required. The resource name of the Location to create the TuningJob in. Format: projects/{project}/locations/{location}
Required. The TuningJob to create.
CsvDestination
The storage details for CSV output content.
Required. Google Cloud Storage location.
CsvSource
The storage details for CSV input content.
Required. Google Cloud Storage location.
CustomJob
Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools, and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters a terminal state (failed or succeeded).
name
string
Output only. Resource name of a CustomJob.
display_name
string
Required. The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Required. Job spec.
Output only. The detailed state of the job.
Output only. Time when the CustomJob was created.
Output only. Time when the CustomJob for the first time entered the JOB_STATE_RUNNING
state.
Output only. Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED
, JOB_STATE_FAILED
, JOB_STATE_CANCELLED
.
Output only. Time when the CustomJob was most recently updated.
Output only. Only populated when job's state is JOB_STATE_FAILED
or JOB_STATE_CANCELLED
.
labels
map<string, string>
The labels with user-defined metadata to organize CustomJobs.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
web_access_uris
map<string, string>
Output only. URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access
is true
.
The keys are names of each node in the training job; for example, workerpool0-0
for the primary node, workerpool1-0
for the first node in the second worker pool, and workerpool1-1
for the second node in the second worker pool.
The values are the URIs for each node's interactive shell.
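The key naming scheme above (workerpool0-0, workerpool1-0, workerpool1-1, ...) can be reproduced from the worker pool replica counts; a hypothetical helper:

```python
def worker_node_names(replica_counts: list[int]) -> list[str]:
    """Enumerate training node names as they appear as keys in
    web_access_uris: 'workerpool{pool_index}-{replica_index}'."""
    return [
        f"workerpool{pool}-{replica}"
        for pool, count in enumerate(replica_counts)
        for replica in range(count)
    ]
```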
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
CustomJobSpec
Represents the spec of a CustomJob.
persistent_resource_id
string
Optional. The ID of the PersistentResource in the same Project and Location in which to run the CustomJob.
If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job must be consistent with those on the PersistentResource; otherwise, the job will be rejected.
Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
Scheduling options for a CustomJob.
service_account
string
Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
network
string
Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC
. Format is of the form projects/{project}/global/networks/{network}
. Where {project} is a project number, as in 12345
, and {network} is a network name.
To specify this field, you must have already configured VPC Network Peering for Vertex AI.
If this field is left unspecified, the job is not peered with any network.
reserved_ip_ranges[]
string
Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job.
If set, the job will be deployed within the provided IP ranges. Otherwise, the job may be deployed in any IP range under the provided VPC network.
Example: ['vertex-ai-ip-range'].
The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the trial id
under its parent HyperparameterTuningJob's baseOutputDirectory.
The following Vertex AI environment variables will be passed to containers or python modules when this field is set:
For CustomJob:
- AIP_MODEL_DIR =
<base_output_directory>/model/
- AIP_CHECKPOINT_DIR =
<base_output_directory>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR =
<base_output_directory>/logs/
For CustomJob backing a Trial of HyperparameterTuningJob:
- AIP_MODEL_DIR =
<base_output_directory>/<trial_id>/model/
- AIP_CHECKPOINT_DIR =
<base_output_directory>/<trial_id>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR =
<base_output_directory>/<trial_id>/logs/
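Training code can pick up the directories listed above via environment variables. A minimal sketch; the variable names come from the list above, while the local fallback paths are illustrative:

```python
import os

def output_dirs() -> dict:
    """Read the Vertex AI output directories injected when
    base_output_directory is set; fall back to local paths for dev runs."""
    return {
        "model": os.environ.get("AIP_MODEL_DIR", "/tmp/model/"),
        "checkpoints": os.environ.get("AIP_CHECKPOINT_DIR", "/tmp/checkpoints/"),
        "tensorboard": os.environ.get("AIP_TENSORBOARD_LOG_DIR", "/tmp/logs/"),
    }
```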
protected_artifact_location_id
string
The ID of the location to store protected artifacts, e.g., us-central1. Populate only when this location is different from the CustomJob's location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
tensorboard
string
Optional. The name of a Vertex AI Tensorboard
resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
enable_web_access
bool
Optional. Whether you want Vertex AI to enable interactive shell access to training containers.
If set to true
, you can access interactive shells at the URIs given by CustomJob.web_access_uris
or Trial.web_access_uris
(within HyperparameterTuningJob.trials
).
enable_dashboard_access
bool
Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container.
If set to true
, you can access the dashboard at the URIs given by CustomJob.web_access_uris
or Trial.web_access_uris
(within HyperparameterTuningJob.trials
).
experiment
string
Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
experiment_run
string
Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
models[]
string
Optional. The name of the Model resources for which to generate a mapping to artifact URIs. Applicable only to some of the Google-provided custom jobs. Format: projects/{project}/locations/{location}/models/{model}
In order to retrieve a specific version of the model, also provide the version ID or version alias. Example: projects/{project}/locations/{location}/models/{model}@2
or projects/{project}/locations/{location}/models/{model}@golden
If no version ID or alias is specified, the "default" version will be returned. The "default" version alias is created for the first version of the model, and can be moved to other versions later on. There will be exactly one default version.
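The `{model}@{version}` convention above can be split with a small helper; a sketch (the function name is hypothetical):

```python
def split_model_name(model_name: str):
    """Split 'projects/.../models/{model}@{version}' into the bare
    resource name and the version ID or alias (None if absent,
    meaning the "default" version is addressed)."""
    base, sep, version = model_name.rpartition("@")
    if not sep:
        return model_name, None
    return base, version
```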
DataItem
A piece of data in a Dataset. Could be an image, a video, a document or plain text.
name
string
Output only. The resource name of the DataItem.
Output only. Timestamp when this DataItem was created.
Output only. Timestamp when this DataItem was last updated.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your DataItems.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one DataItem(System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
Required. The data that the DataItem represents (for example, an image or a text snippet). The schema of the payload is stored in the parent Dataset's metadata schema's
dataItemSchemaUri field.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
DataItemView
A container for a single DataItem and Annotations on it.
The DataItem.
The Annotations on the DataItem. If more Annotations match than the request's annotations_limit allows, this field is truncated and has_truncated_annotations is set to true.
has_truncated_annotations
bool
True if and only if the Annotations field has been truncated, i.e., more Annotations for this DataItem matched the request's annotation_filter than annotations_limit allows to be returned. If the Annotations field is omitted due to a field mask, this field is not set to true, regardless of how many Annotations exist.
Dataset
A collection of DataItems and Annotations on them.
name
string
Output only. Identifier. The resource name of the Dataset.
display_name
string
Required. The user-defined name of the Dataset. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the Dataset.
metadata_schema_uri
string
Required. Points to a YAML file stored on Google Cloud Storage describing additional information about the Dataset. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/metadata/.
Required. Additional information about the Dataset.
data_item_count
int64
Output only. The number of DataItems in this Dataset. Only applies to non-structured Datasets.
Output only. Timestamp when this Dataset was created.
Output only. Timestamp when this Dataset was last updated.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Datasets.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Dataset (System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for each Dataset:
- "aiplatform.googleapis.com/dataset_metadata_schema": output only, its value is the
metadata_schema's
title.
All SavedQueries belonging to the Dataset will be returned in the List/Get Dataset response. The annotation_specs field will not be populated except in UI cases, which use only annotation_spec_count
. In a CreateDataset request, a SavedQuery is created together with the Dataset if this field is set; at most one SavedQuery can be set in CreateDatasetRequest. The SavedQuery must not contain any AnnotationSpec.
Customer-managed encryption key spec for a Dataset. If set, this Dataset and all sub-resources of this Dataset will be secured by this key.
metadata_artifact
string
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Dataset. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}
.
model_reference
string
Optional. Reference to the public base model last used by the dataset. Only set for prompt datasets.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
DatasetVersion
Describes the dataset version.
name
string
Output only. Identifier. The resource name of the DatasetVersion.
Output only. Timestamp when this DatasetVersion was created.
Output only. Timestamp when this DatasetVersion was last updated.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
big_query_dataset_name
string
Output only. Name of the associated BigQuery dataset.
display_name
string
The user-defined name of the DatasetVersion. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Required. Output only. Additional information about the DatasetVersion.
model_reference
string
Output only. Reference to the public base model last used by the dataset version. Only set for prompt dataset versions.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
DedicatedResources
A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.
Required. Immutable. The specification of a single machine used by the prediction.
min_replica_count
int32
Required. Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1.
If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
max_replica_count
int32
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count
as the default value.
The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the default target is 60 if not set). At most one entry is allowed per metric.
If machine_spec.accelerator_count
is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value, and scales down only when both metrics are under their targets. The default target value is 60 for both metrics.
If machine_spec.accelerator_count
is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.
For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name
to aiplatform.googleapis.com/prediction/online/cpu/utilization
and autoscaling_metric_specs.target
to 80
.
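The CPU-target override described above maps to a small message. Sketched here as a plain dict for a REST payload; the field names follow this reference, and the helper itself is hypothetical:

```python
def cpu_autoscaling_spec(target: int = 80) -> dict:
    """Build an AutoscalingMetricSpec entry that overrides the default
    CPU utilization target (60) for online prediction."""
    if not 1 <= target <= 100:
        raise ValueError("target must be a percentage in [1, 100]")
    return {
        "metric_name": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
        "target": target,
    }
```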
spot
bool
Optional. If true, schedule the deployment workload on spot VMs.
DeleteArtifactRequest
Request message for MetadataService.DeleteArtifact
.
name
string
Required. The resource name of the Artifact to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}
etag
string
Optional. The etag of the Artifact to delete. If this is provided, it must match the server's etag. Otherwise, the request will fail with a FAILED_PRECONDITION.
DeleteBatchPredictionJobRequest
Request message for JobService.DeleteBatchPredictionJob
.
name
string
Required. The name of the BatchPredictionJob resource to be deleted. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}
DeleteContextRequest
Request message for MetadataService.DeleteContext
.
name
string
Required. The resource name of the Context to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
force
bool
The force deletion semantics are still undefined. Users should not use this field.
etag
string
Optional. The etag of the Context to delete. If this is provided, it must match the server's etag. Otherwise, the request will fail with a FAILED_PRECONDITION.
DeleteCustomJobRequest
Request message for JobService.DeleteCustomJob
.
name
string
Required. The name of the CustomJob resource to be deleted. Format: projects/{project}/locations/{location}/customJobs/{custom_job}
DeleteDatasetRequest
Request message for DatasetService.DeleteDataset
.
name
string
Required. The resource name of the Dataset to delete. Format: projects/{project}/locations/{location}/datasets/{dataset}
DeleteDatasetVersionRequest
Request message for DatasetService.DeleteDatasetVersion
.
name
string
Required. The resource name of the Dataset version to delete. Format: projects/{project}/locations/{location}/datasets/{dataset}/datasetVersions/{dataset_version}
DeleteDeploymentResourcePoolRequest
Request message for DeleteDeploymentResourcePool method.
name
string
Required. The name of the DeploymentResourcePool to delete. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
DeleteEndpointRequest
Request message for EndpointService.DeleteEndpoint
.
name
string
Required. The name of the Endpoint resource to be deleted. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
DeleteEntityTypeRequest
Request message for [FeaturestoreService.DeleteEntityTypes][].
name
string
Required. The name of the EntityType to be deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
force
bool
If set to true, any Features for this EntityType will also be deleted. (Otherwise, the request will only work if the EntityType has no Features.)
DeleteExecutionRequest
Request message for MetadataService.DeleteExecution
.
name
string
Required. The resource name of the Execution to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
etag
string
Optional. The etag of the Execution to delete. If this is provided, it must match the server's etag. Otherwise, the request will fail with a FAILED_PRECONDITION.
DeleteFeatureGroupRequest
Request message for FeatureRegistryService.DeleteFeatureGroup
.
name
string
Required. The name of the FeatureGroup to be deleted. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}
force
bool
If set to true, any Features under this FeatureGroup will also be deleted. (Otherwise, the request will only work if the FeatureGroup has no Features.)
DeleteFeatureOnlineStoreRequest
Request message for FeatureOnlineStoreAdminService.DeleteFeatureOnlineStore
.
name
string
Required. The name of the FeatureOnlineStore to be deleted. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}
force
bool
If set to true, any FeatureViews and Features for this FeatureOnlineStore will also be deleted. (Otherwise, the request will only work if the FeatureOnlineStore has no FeatureViews.)
DeleteFeatureRequest
Request message for FeaturestoreService.DeleteFeature
. Request message for FeatureRegistryService.DeleteFeature
.
name
string
Required. The name of the Feature to be deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature}
or projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature}
DeleteFeatureValuesOperationMetadata
Details of operations that delete Feature values.
Operation metadata for Featurestore delete Features values.
DeleteFeatureValuesRequest
Request message for FeaturestoreService.DeleteFeatureValues
.
entity_type
string
Required. The resource name of the EntityType grouping the Features for which values are being deleted from. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}
DeleteOption. Defines options to select the feature values to be deleted. DeleteOption can be only one of the following:
Select feature values to be deleted by specifying entities.
Select feature values to be deleted by specifying a time range and features.
SelectEntity
Message to select entity. If an entity id is selected, all the feature values corresponding to the entity id will be deleted, including the entityId.
Required. Selectors choosing feature values of which entity id to be deleted from the EntityType.
SelectTimeRangeAndFeature
Message to select time range and feature. Values of the selected feature generated within an inclusive time range will be deleted. Using this option permanently deletes the feature values from the specified feature IDs within the specified time range. This might include data from the online storage. If you want to retain any deleted historical data in the online storage, you must re-ingest it.
Required. Selects feature values generated within a half-inclusive time range: the lower bound is inclusive and the upper bound is exclusive.
Required. Selectors choosing which feature values are to be deleted from the EntityType.
skip_online_storage_delete
bool
If set, data will not be deleted from online storage. When time range is older than the data in online storage, setting this to be true will make the deletion have no impact on online serving.
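The half-inclusive time window described above can be sketched with a small illustrative helper (hypothetical, not part of any client library):

```python
from datetime import datetime, timezone

def in_delete_range(value_time, start, end):
    """Return True if a feature value's timestamp falls in the
    half-inclusive deletion window [start, end): the lower bound is
    inclusive and the upper bound is exclusive, mirroring the
    SelectTimeRangeAndFeature semantics described above."""
    return start <= value_time < end

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 2, 1, tzinfo=timezone.utc)
assert in_delete_range(start, start, end)    # value at the lower bound is deleted
assert not in_delete_range(end, start, end)  # value at the upper bound is retained
```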
DeleteFeatureValuesResponse
Response message for FeaturestoreService.DeleteFeatureValues
.
response
The response depends on which delete option is specified in the request. response can be only one of the following:
Response for a request specifying the entities to delete.
Response for a request specifying a time range and features.
SelectEntity
Response message if the request uses the SelectEntity option.
offline_storage_deleted_entity_row_count
int64
The count of deleted entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage.
online_storage_deleted_entity_count
int64
The count of deleted entities in the online storage. Each entity ID corresponds to one entity.
SelectTimeRangeAndFeature
Response message if the request uses the SelectTimeRangeAndFeature option.
impacted_feature_count
int64
The count of the features or columns impacted. This is the same as the feature count in the request.
offline_storage_modified_entity_row_count
int64
The count of modified entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage. Within each row, only the features specified in the request are deleted.
online_storage_modified_entity_count
int64
The count of modified entities in the online storage. Each entity ID corresponds to one entity. Within each entity, only the features specified in the request are deleted.
DeleteFeatureViewRequest
Request message for FeatureOnlineStoreAdminService.DeleteFeatureViews.
name
string
Required. The name of the FeatureView to be deleted. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}
DeleteFeaturestoreRequest
Request message for FeaturestoreService.DeleteFeaturestore
.
name
string
Required. The name of the Featurestore to be deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}
force
bool
If set to true, any EntityTypes and Features for this Featurestore will also be deleted. (Otherwise, the request will only work if the Featurestore has no EntityTypes.)
DeleteHyperparameterTuningJobRequest
Request message for JobService.DeleteHyperparameterTuningJob
.
name
string
Required. The name of the HyperparameterTuningJob resource to be deleted. Format: projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}
DeleteIndexEndpointRequest
Request message for IndexEndpointService.DeleteIndexEndpoint
.
name
string
Required. The name of the IndexEndpoint resource to be deleted. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}
DeleteIndexRequest
Request message for IndexService.DeleteIndex
.
name
string
Required. The name of the Index resource to be deleted. Format: projects/{project}/locations/{location}/indexes/{index}
DeleteMetadataStoreOperationMetadata
Details of operations that perform MetadataService.DeleteMetadataStore
.
Operation metadata for deleting a MetadataStore.
DeleteMetadataStoreRequest
Request message for MetadataService.DeleteMetadataStore
.
name
string
Required. The resource name of the MetadataStore to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
force
(deprecated)
bool
Deprecated: Field is no longer supported.
DeleteModelDeploymentMonitoringJobRequest
Request message for JobService.DeleteModelDeploymentMonitoringJob
.
name
string
Required. The resource name of the model monitoring job to delete. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
DeleteModelRequest
Request message for ModelService.DeleteModel
.
name
string
Required. The name of the Model resource to be deleted. Format: projects/{project}/locations/{location}/models/{model}
DeleteModelVersionRequest
Request message for ModelService.DeleteModelVersion
.
name
string
Required. The name of the model version to be deleted, with a version ID explicitly included.
Example: projects/{project}/locations/{location}/models/{model}@1234
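The versioned-name convention above can be made concrete with a small parsing helper (hypothetical, not part of any client library):

```python
def split_model_version(name):
    """Split a Model resource name into (base_name, version), where the
    version ID or alias, if present, follows an '@' as in
    projects/{project}/locations/{location}/models/{model}@1234."""
    base, sep, version = name.partition("@")
    return base, (version if sep else None)

base, version = split_model_version(
    "projects/p/locations/us-central1/models/m@1234")
assert base == "projects/p/locations/us-central1/models/m"
assert version == "1234"
```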
DeleteNasJobRequest
Request message for JobService.DeleteNasJob
.
name
string
Required. The name of the NasJob resource to be deleted. Format: projects/{project}/locations/{location}/nasJobs/{nas_job}
DeleteNotebookExecutionJobRequest
Request message for NotebookService.DeleteNotebookExecutionJob.
name
string
Required. The name of the NotebookExecutionJob resource to be deleted.
DeleteNotebookRuntimeRequest
Request message for NotebookService.DeleteNotebookRuntime
.
name
string
Required. The name of the NotebookRuntime resource to be deleted. Instead of checking whether the name is in a valid NotebookRuntime resource name format, the service directly throws a NotFound exception if no such NotebookRuntime exists.
DeleteNotebookRuntimeTemplateRequest
Request message for NotebookService.DeleteNotebookRuntimeTemplate
.
name
string
Required. The name of the NotebookRuntimeTemplate resource to be deleted. Format: projects/{project}/locations/{location}/notebookRuntimeTemplates/{notebook_runtime_template}
DeleteOperationMetadata
Details of operations that perform deletes of any entities.
The common part of the operation metadata.
DeletePersistentResourceRequest
Request message for PersistentResourceService.DeletePersistentResource
.
name
string
Required. The name of the PersistentResource to be deleted. Format: projects/{project}/locations/{location}/persistentResources/{persistent_resource}
DeletePipelineJobRequest
Request message for PipelineService.DeletePipelineJob
.
name
string
Required. The name of the PipelineJob resource to be deleted. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}
DeleteSavedQueryRequest
Request message for DatasetService.DeleteSavedQuery
.
name
string
Required. The resource name of the SavedQuery to delete. Format: projects/{project}/locations/{location}/datasets/{dataset}/savedQueries/{saved_query}
DeleteScheduleRequest
Request message for ScheduleService.DeleteSchedule
.
name
string
Required. The name of the Schedule resource to be deleted. Format: projects/{project}/locations/{location}/schedules/{schedule}
DeleteSpecialistPoolRequest
Request message for SpecialistPoolService.DeleteSpecialistPool
.
name
string
Required. The resource name of the SpecialistPool to delete. Format: projects/{project}/locations/{location}/specialistPools/{specialist_pool}
force
bool
If set to true, any specialist managers in this SpecialistPool will also be deleted. (Otherwise, the request will only work if the SpecialistPool has no specialist managers.)
DeleteStudyRequest
Request message for VizierService.DeleteStudy
.
name
string
Required. The name of the Study resource to be deleted. Format: projects/{project}/locations/{location}/studies/{study}
DeleteTensorboardExperimentRequest
Request message for TensorboardService.DeleteTensorboardExperiment
.
name
string
Required. The name of the TensorboardExperiment to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
DeleteTensorboardRequest
Request message for TensorboardService.DeleteTensorboard
.
name
string
Required. The name of the Tensorboard to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
DeleteTensorboardRunRequest
Request message for TensorboardService.DeleteTensorboardRun
.
name
string
Required. The name of the TensorboardRun to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
DeleteTensorboardTimeSeriesRequest
Request message for TensorboardService.DeleteTensorboardTimeSeries
.
name
string
Required. The name of the TensorboardTimeSeries to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
DeleteTrainingPipelineRequest
Request message for PipelineService.DeleteTrainingPipeline
.
name
string
Required. The name of the TrainingPipeline resource to be deleted. Format: projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}
DeleteTrialRequest
Request message for VizierService.DeleteTrial
.
name
string
Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}
DeployIndexOperationMetadata
Runtime operation information for IndexEndpointService.DeployIndex
.
The operation generic information.
deployed_index_id
string
The unique index ID specified by the user.
DeployIndexRequest
Request message for IndexEndpointService.DeployIndex
.
index_endpoint
string
Required. The name of the IndexEndpoint resource into which to deploy an Index. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}
Required. The DeployedIndex to be created within the IndexEndpoint.
DeployIndexResponse
Response message for IndexEndpointService.DeployIndex
.
The DeployedIndex that had been deployed in the IndexEndpoint.
DeployModelOperationMetadata
Runtime operation information for EndpointService.DeployModel
.
The operation generic information.
DeployModelRequest
Request message for EndpointService.DeployModel
.
endpoint
string
Required. The name of the Endpoint resource into which to deploy a Model. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Required. The DeployedModel to be created within the Endpoint. Note that Endpoint.traffic_split
must be updated for the DeployedModel to start receiving traffic, either as part of this call, or via EndpointService.UpdateEndpoint
.
traffic_split
map<string, int32>
A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel.
If this field is non-empty, then the Endpoint's traffic_split
will be overwritten with it. To refer to the ID of the Model being deployed by this call, use "0"; the actual ID of the new DeployedModel will be filled in its place by this method. The traffic percentage values must add up to 100.
If this field is empty, then the Endpoint's traffic_split
is not updated.
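The traffic_split invariant described above can be sketched as follows (validate_traffic_split is a hypothetical helper; the service performs this validation server-side):

```python
def validate_traffic_split(traffic_split):
    """Check the DeployModel traffic_split rule: if the map is non-empty,
    the percentages must add up to 100. The key "0" stands for the
    DeployedModel being created by this call."""
    if traffic_split and sum(traffic_split.values()) != 100:
        raise ValueError("traffic_split percentages must add up to 100")

validate_traffic_split({})                       # empty: traffic_split is not updated
validate_traffic_split({"0": 80, "abc123": 20})  # "0" = the model being deployed
```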
DeployModelResponse
Response message for EndpointService.DeployModel
.
The DeployedModel that had been deployed in the Endpoint.
DeployedIndex
A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
id
string
Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
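The documented ID rules can be expressed as a regular expression (a sketch only; the service is authoritative for validation):

```python
import re

# Starts with a letter, then letters, digits, or underscores,
# at most 128 characters in total, per the rules above.
DEPLOYED_INDEX_ID_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,127}$")

assert DEPLOYED_INDEX_ID_RE.match("my_index_v2")
assert not DEPLOYED_INDEX_ID_RE.match("2nd_index")  # must start with a letter
assert not DEPLOYED_INDEX_ID_RE.match("a" * 129)    # too long
```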
index
string
Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
display_name
string
The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
Output only. Timestamp when the DeployedIndex was created.
Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network
is configured.
Output only. The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g., when the contents of the Index change), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed operations with an update_time equal to or before this sync time are contained in this DeployedIndex.
Optional. A description of resources that the DeployedIndex uses, which are to a large degree decided by Vertex AI, and optionally allow only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard.
Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard.
Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32.
n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
enable_access_logging
bool
Optional. If true, private endpoint's access logs are sent to Cloud Logging.
These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest.
Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
Optional. If set, the authentication is enabled for the private endpoint.
reserved_ip_ranges[]
string
Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex.
If set, the index will be deployed within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network.
The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range'].
For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
deployment_group
string
Optional. The deployment group can be no longer than 64 characters (e.g., 'test', 'prod'). If not set, the 'default' deployment group is used.
Creating deployment_groups
with reserved_ip_ranges
is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed.
Note: we only support up to 5 deployment groups (not including 'default').
Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
DeployedIndexAuthConfig
Used to set up the auth on the DeployedIndex's private endpoint.
Defines the authentication provider that the DeployedIndex uses.
AuthProvider
Configuration for an authentication provider, including support for JSON Web Token (JWT).
audiences[]
string
The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
allowed_issuers[]
string
A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format:
service-account-name@project-id.iam.gserviceaccount.com
DeployedIndexRef
Points to a DeployedIndex.
index_endpoint
string
Immutable. A resource name of the IndexEndpoint.
deployed_index_id
string
Immutable. The ID of the DeployedIndex in the above IndexEndpoint.
display_name
string
Output only. The display name of the DeployedIndex.
DeployedModel
A deployment of a Model. Endpoints contain one or more DeployedModels.
id
string
Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.
This value should be 1-10 characters, and valid characters are /[0-9]/.
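The stated constraint (1-10 characters drawn from [0-9]) can be sketched as a regular expression (the service is authoritative for validation):

```python
import re

# 1-10 characters, each drawn from [0-9], per the constraint above.
DEPLOYED_MODEL_ID_RE = re.compile(r"^[0-9]{1,10}$")

assert DEPLOYED_MODEL_ID_RE.match("1234567890")
assert not DEPLOYED_MODEL_ID_RE.match("model-1")      # only digits are valid
assert not DEPLOYED_MODEL_ID_RE.match("01234567890")  # 11 characters: too long
```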
model
string
Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.
The resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
model_version_id
string
Output only. The version ID of the model that is deployed.
display_name
string
The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
Output only. Timestamp when the DeployedModel was created.
Explanation configuration for this DeployedModel.
When deploying a Model using EndpointService.DeployModel
, this value overrides the value of Model.explanation_spec
. All fields of explanation_spec
are optional in the request. If a field of explanation_spec
is not populated, the value of the same field of Model.explanation_spec
is inherited. If the corresponding Model.explanation_spec
is not populated, all fields of the explanation_spec
will be used for the explanation configuration.
disable_explanations
bool
If true, deploy the model without the explanation feature, regardless of whether Model.explanation_spec or explanation_spec is populated.
service_account
string
The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.
Users deploying the Model must have the iam.serviceAccounts.actAs
permission on this service account.
disable_container_logging
bool
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Note that these logs incur a cost, subject to Cloud Logging pricing.
Users can disable container logging by setting this flag to true.
enable_access_logging
bool
If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request.
Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network
is configured.
prediction_resources
The prediction (for example, machine) resources that the DeployedModel uses. The user is billed for the resources (at least their minimal amount) even if the DeployedModel receives no traffic. Not all Models support all resource types. See Model.supported_deployment_resources_types. Required except for Large Model Deploy use cases. prediction_resources can be only one of the following:
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
A description of resources that are to a large degree decided by Vertex AI, and require only a modest additional configuration.
DeployedModelRef
Points to a DeployedModel.
endpoint
string
Immutable. A resource name of an Endpoint.
deployed_model_id
string
Immutable. An ID of a DeployedModel in the above Endpoint.
DeploymentResourcePool
A description of resources that can be shared by multiple DeployedModels, whose underlying specification consists of a DedicatedResources.
name
string
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
Required. The underlying DedicatedResources that the DeploymentResourcePool uses.
Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec.
service_account
string
The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project.
Users deploying the Models to this DeploymentResourcePool must have the iam.serviceAccounts.actAs
permission on this service account.
disable_container_logging
bool
If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send stderr and stdout streams to Cloud Logging by default. Note that these logs incur a cost, subject to Cloud Logging pricing.
Users can disable container logging by setting this flag to true.
Output only. Timestamp when this DeploymentResourcePool was created.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
DestinationFeatureSetting
feature_id
string
Required. The ID of the Feature to apply the setting to.
destination_field
string
Specify the field name in the export destination. If not specified, Feature ID is used.
DirectPredictRequest
Request message for PredictionService.DirectPredict
.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
The prediction input.
The parameters that govern the prediction.
DirectPredictResponse
Response message for PredictionService.DirectPredict
.
The prediction output.
The parameters that govern the prediction.
DirectRawPredictRequest
Request message for PredictionService.DirectRawPredict
.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
method_name
string
Fully qualified name of the API method being invoked to perform predictions.
Format: /namespace.Service/Method/
Example: /tensorflow.serving.PredictionService/Predict
input
bytes
The prediction input.
DirectRawPredictResponse
Response message for PredictionService.DirectRawPredict
.
output
bytes
The prediction output.
DiskSpec
Represents the spec of disk options.
boot_disk_type
string
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
boot_disk_size_gb
int32
Size in GB of the boot disk (default is 100GB).
DoubleArray
A list of double values.
values[]
double
A list of double values.
DynamicRetrievalConfig
Describes the options to customize dynamic retrieval.
The mode of the predictor to be used in dynamic retrieval.
dynamic_threshold
float
Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
Mode
The mode of the predictor to be used in dynamic retrieval.
Enums
MODE_UNSPECIFIED: Always trigger retrieval.
MODE_DYNAMIC: Run retrieval only when the system decides it is necessary.
EncryptionSpec
Represents a customer-managed encryption key spec that can be applied to a top-level resource.
kms_key_name
string
Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key
. The key needs to be in the same region as where the compute resource is created.
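The kms_key_name shape can be assembled with a trivial helper (hypothetical; shown only to make the resource-name format concrete):

```python
def kms_key_name(project, region, key_ring, key):
    """Build a Cloud KMS resource identifier in the form EncryptionSpec
    expects. The key must be in the same region as the compute resource."""
    return (f"projects/{project}/locations/{region}"
            f"/keyRings/{key_ring}/cryptoKeys/{key}")

assert kms_key_name("my-project", "my-region", "my-kr", "my-key") == (
    "projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key")
```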
Endpoint
Models are deployed into an Endpoint, which is afterwards called to obtain predictions and explanations.
name
string
Output only. The resource name of the Endpoint.
display_name
string
Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the Endpoint.
Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel
and EndpointService.UndeployModel
respectively.
traffic_split
map<string, int32>
A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel.
If a DeployedModel's ID is not listed in this map, then it receives no traffic.
The traffic percentage values must add up to 100, or the map must be empty if the Endpoint does not accept any traffic at the moment.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Endpoints.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Output only. Timestamp when this Endpoint was created.
Output only. Timestamp when this Endpoint was last updated.
Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
network
string
Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered.
Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network.
Only one of the fields, network
or enable_private_service_connect
, can be set.
Format: projects/{project}/global/networks/{network}
. Where {project}
is a project number, as in 12345
, and {network}
is network name.
enable_private_service_connect
(deprecated)
bool
Deprecated: If true, expose the Endpoint via private service connect.
Only one of the fields, network
or enable_private_service_connect
, can be set.
Optional. Configuration for private service connect.
network
and private_service_connect_config
are mutually exclusive.
model_deployment_monitoring_job
string
Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob
. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
Configures the request-response logging for online prediction.
dedicated_endpoint_enabled
bool
If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your requests to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: once you enable the dedicated endpoint, you won't be able to send requests to the shared DNS {region}-aiplatform.googleapis.com. This limitation will be removed soon.
dedicated_endpoint_dns
string
Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Format: https://{endpoint_id}.{region}-{project_number}.prediction.vertexai.goog
.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
EntityIdSelector
Selector for entityId. Getting ids from the given source.
entity_id_field
string
Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id.
EntityIdsSource
Details about the source data, including the location of the storage and the format. EntityIdsSource can be only one of the following:
Source of CSV.
EntityType
An entity type is a type of object in a system that needs to be modeled and about which information needs to be stored. For example, driver is an entity type, and driver0 is an instance of the entity type driver.
name
string
Immutable. Name of the EntityType. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
The last part, entity_type, is assigned by the client. The entity_type can be up to 64 characters long, can consist only of ASCII Latin letters A-Z and a-z, underscore (_), and ASCII digits 0-9, and must start with a letter. The value will be unique within a featurestore.
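The naming rules above can be sketched as a regular expression (illustrative only; the service is authoritative for validation):

```python
import re

# Starts with a letter, then ASCII letters, digits, or underscores,
# at most 64 characters in total, per the naming rules above.
ENTITY_TYPE_ID_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,63}$")

assert ENTITY_TYPE_ID_RE.match("driver")
assert not ENTITY_TYPE_ID_RE.match("0driver")  # must start with a letter
assert not ENTITY_TYPE_ID_RE.match("d" * 65)   # too long
```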
description
string
Optional. Description of the EntityType.
Output only. Timestamp when this EntityType was created.
Output only. Timestamp when this EntityType was most recently updated.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your EntityTypes.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one EntityType (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
Optional. The default monitoring configuration for all Features with value type (Feature.ValueType
) BOOL, STRING, DOUBLE or INT64 under this EntityType.
If this is populated with [FeaturestoreMonitoringConfig.monitoring_interval] specified, snapshot analysis monitoring is enabled. Otherwise, snapshot analysis monitoring is disabled.
offline_storage_ttl_days
int32
Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than offline_storage_ttl_days
since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
EnvVar
Represents an environment variable present in a Container or Python Module.
name
string
Required. Name of the environment variable. Must be a valid C identifier.
value
string
Required. Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e., $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
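The expansion semantics above can be sketched with a small helper (expand_env is hypothetical, written to the documented behavior, not taken from any library):

```python
import re

def expand_env(value, env):
    """Sketch of the documented $(VAR_NAME) expansion: references resolve
    against previously defined variables, unresolved references are left
    unchanged, and $$(VAR_NAME) escapes to a literal $(VAR_NAME)."""
    def repl(m):
        if m.group(0).startswith("$$"):
            return m.group(0)[1:]               # escaped: drop one '$', never expand
        return env.get(m.group(1), m.group(0))  # unresolved references stay as-is
    return re.sub(r"\$?\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, value)

env = {"HOME": "/home/app"}
assert expand_env("$(HOME)/bin", env) == "/home/app/bin"
assert expand_env("$$(HOME)", env) == "$(HOME)"       # escaped, not expanded
assert expand_env("$(MISSING)", env) == "$(MISSING)"  # unresolved, unchanged
```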
ErrorAnalysisAnnotation
Model error analysis for each annotation.
Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.
The query type used for finding the attributed items.
outlier_score
double
The outlier score of this annotated item. Usually defined as the min of all distances from attributed items.
outlier_threshold
double
The threshold used to determine if this annotation is an outlier or not.
AttributedItem
Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.
annotation_resource_name
string
The unique ID for each annotation. Used by FE to allocate the annotation in DB.
distance
double
The distance of this item to the annotation.
QueryType
The query type used for finding the attributed items.
| Enums | |
|---|---|
| QUERY_TYPE_UNSPECIFIED | Unspecified query type for model error analysis. |
| ALL_SIMILAR | Query similar samples across all classes in the dataset. |
| SAME_CLASS_SIMILAR | Query similar samples from the same class of the input sample. |
| SAME_CLASS_DISSIMILAR | Query dissimilar samples from the same class of the input sample. |
EvaluateInstancesRequest
Request message for EvaluationService.EvaluateInstances.
location
string
Required. The resource name of the Location to evaluate the instances. Format: projects/{project}/locations/{location}
metric_inputs

Union field metric_inputs. Instances and specs for evaluation. metric_inputs can be only one of the following:

Auto metric instances:
- Instances and metric spec for exact match metric.
- Instances and metric spec for bleu metric.
- Instances and metric spec for rouge metric.

LLM-based metric instances (general text generation metrics, applicable to other categories):
- Input for fluency metric.
- Input for coherence metric.
- Input for safety metric.
- Input for groundedness metric.
- Input for fulfillment metric.
- Input for summarization quality metric.
- Input for pairwise summarization quality metric.
- Input for summarization helpfulness metric.
- Input for summarization verbosity metric.
- Input for question answering quality metric.
- Input for pairwise question answering quality metric.
- Input for question answering relevance metric.
- Input for question answering helpfulness metric.
- Input for question answering correctness metric.

Tool call metric instances:
- Input for tool call valid metric.
- Input for tool name match metric.
- Input for tool parameter key match metric.
- Input for tool parameter key value match metric.
EvaluateInstancesResponse
Response message for EvaluationService.EvaluateInstances.
evaluation_results

Union field evaluation_results. Evaluation results will be served in the same order as presented in EvaluationRequest.instances. evaluation_results can be only one of the following:

Auto metric evaluation results:
- Results for exact match metric.
- Results for bleu metric.
- Results for rouge metric.

LLM-based metric evaluation results (general text generation metrics, applicable to other categories):
- Result for fluency metric.
- Result for coherence metric.
- Result for safety metric.
- Result for groundedness metric.
- Result for fulfillment metric.

Summarization-only metrics:
- Result for summarization quality metric.
- Result for pairwise summarization quality metric.
- Result for summarization helpfulness metric.
- Result for summarization verbosity metric.

Question-answering-only metrics:
- Result for question answering quality metric.
- Result for pairwise question answering quality metric.
- Result for question answering relevance metric.
- Result for question answering helpfulness metric.
- Result for question answering correctness metric.

Tool call metrics:
- Results for tool call valid metric.
- Results for tool name match metric.
- Results for tool parameter key match metric.
- Results for tool parameter key value match metric.
EvaluatedAnnotation
True positive, false positive, or false negative.
EvaluatedAnnotation is only available under ModelEvaluationSlice with slice of annotationSpec
dimension.
Output only. Type of the EvaluatedAnnotation.
Output only. The model predicted annotations.
For true positive, there is one and only one prediction, which matches the only one ground truth annotation in ground_truths
.
For false positive, there is one and only one prediction, which doesn't match any ground truth annotation of the corresponding [data_item_view_id][EvaluatedAnnotation.data_item_view_id].
For false negative, there are zero or more predictions which are similar to the only ground truth annotation in ground_truths
but not enough for a match.
The schema of the prediction is stored in ModelEvaluation.annotation_schema_uri
Output only. The ground truth Annotations, i.e. the Annotations that exist in the test data the Model is evaluated on.
For true positive, there is one and only one ground truth annotation, which matches the only prediction in predictions
.
For false positive, there are zero or more ground truth annotations that are similar to the only prediction in predictions
, but not enough for a match.
For false negative, there is one and only one ground truth annotation, which doesn't match any predictions created by the model.
The schema of the ground truth is stored in ModelEvaluation.annotation_schema_uri
Output only. The data item payload that the Model predicted this EvaluatedAnnotation on.
evaluated_data_item_view_id
string
Output only. ID of the EvaluatedDataItemView under the same ancestor ModelEvaluation. The EvaluatedDataItemView consists of all ground truths and predictions on data_item_payload
.
Explanations of predictions
. Each element of the explanations indicates the explanation for one explanation Method.
The attributions list in the EvaluatedAnnotationExplanation.explanation
object corresponds to the predictions
list. For example, the second element in the attributions list explains the second element in the predictions list.
Annotations of model error analysis results.
EvaluatedAnnotationType
Describes the type of the EvaluatedAnnotation. The type is determined by whether the prediction matches a ground truth annotation.
| Enums | |
|---|---|
| EVALUATED_ANNOTATION_TYPE_UNSPECIFIED | Invalid value. |
| TRUE_POSITIVE | The EvaluatedAnnotation is a true positive. It has a prediction created by the Model and a ground truth Annotation which the prediction matches. |
| FALSE_POSITIVE | The EvaluatedAnnotation is a false positive. It has a prediction created by the Model which does not match any ground truth annotation. |
| FALSE_NEGATIVE | The EvaluatedAnnotation is a false negative. It has a ground truth annotation which is not matched by any of the model-created predictions. |
EvaluatedAnnotationExplanation
Explanation result of the prediction produced by the Model.
explanation_type
string
Explanation type.
For AutoML Image Classification models, possible values are:
image-integrated-gradients
image-xrai
Explanation attribution response details.
Event
An edge describing the relationship between an Artifact and an Execution in a lineage graph.
artifact
string
Required. The relative resource name of the Artifact in the Event.
execution
string
Output only. The relative resource name of the Execution in the Event.
Output only. Time the Event occurred.
Required. The type of the Event.
labels
map<string, string>
The labels with user-defined metadata to annotate Events.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Event (System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
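The constraints above can be checked with a small validator sketch (simplified: the character-class rule is reduced to a length check, and the function name is hypothetical):

```python
RESERVED_PREFIX = "aiplatform.googleapis.com/"  # system labels, immutable
MAX_LEN = 64      # max Unicode codepoints per key or value
MAX_LABELS = 64   # max user labels per Event

def validate_labels(labels):
    """Return a list of problems found in a user-label map."""
    problems = []
    if len(labels) > MAX_LABELS:
        problems.append("more than %d user labels" % MAX_LABELS)
    for key, value in labels.items():
        if key.startswith(RESERVED_PREFIX):
            problems.append("key %r uses the reserved system prefix" % key)
        if len(key) > MAX_LEN or len(value) > MAX_LEN:
            problems.append("key or value of %r exceeds %d characters" % (key, MAX_LEN))
    return problems

print(validate_labels({"team": "vision", "env": "prod"}))   # []
```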
Type
Describes whether an Event's Artifact is the Execution's input or output.
| Enums | |
|---|---|
| TYPE_UNSPECIFIED | Unspecified whether input or output of the Execution. |
| INPUT | An input of the Execution. |
| OUTPUT | An output of the Execution. |
ExactMatchInput
Input for exact match metric.
Required. Spec for exact match metric.
Required. Repeated exact match instances.
ExactMatchInstance
Spec for exact match instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
ExactMatchMetricValue
Exact match metric value for an instance.
score
float
Output only. Exact match score.
ExactMatchResults
Results for exact match metric.
Output only. Exact match metric values.
ExactMatchSpec
This type has no fields.
Spec for exact match metric - returns 1 if prediction and reference exactly matches, otherwise 0.
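The metric's semantics can be sketched in a few lines (an illustration of the scoring rule, not the service implementation):

```python
def exact_match_score(prediction, reference):
    """1 if the prediction exactly matches the reference, else 0."""
    return 1 if prediction == reference else 0

instances = [
    ("Paris", "Paris"),
    ("paris", "Paris"),   # exact match is case-sensitive in this sketch
]
print([exact_match_score(p, r) for p, r in instances])   # [1, 0]
```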
Examples
Example-based explainability that returns the nearest neighbors from the provided dataset.
neighbor_count
int32
The number of neighbors to return when querying for examples.
Union field source. source can be only one of the following:
- The Cloud Storage input instances.

Union field config. config can be only one of the following:
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
ExampleGcsSource
The Cloud Storage input instances.
The format in which instances are given. If not specified, JSONL format is assumed. Currently only JSONL format is supported.
The Cloud Storage location for the input instances.
DataFormat
The format of the input example instances.
| Enums | |
|---|---|
| DATA_FORMAT_UNSPECIFIED | Format unspecified, used when unset. |
| JSONL | Examples are stored in JSONL files. |
ExamplesOverride
Overrides for example-based explanations.
neighbor_count
int32
The number of neighbors to return.
crowding_count
int32
The number of neighbors to return that have the same crowding tag.
Restrict the resulting nearest neighbors to respect these constraints.
return_embeddings
bool
If true, return the embeddings instead of neighbors.
The format of the data being provided with each call.
DataFormat
Data format enum.
| Enums | |
|---|---|
| DATA_FORMAT_UNSPECIFIED | Unspecified format. Must not be used. |
| INSTANCES | Provided data is a set of model inputs. |
| EMBEDDINGS | Provided data is a set of embeddings. |
ExamplesRestrictionsNamespace
Restrictions namespace for example-based explanations overrides.
namespace_name
string
The namespace name.
allow[]
string
The list of allowed tags.
deny[]
string
The list of deny tags.
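A plausible reading of how allow/deny tags restrict returned neighbors can be sketched as (the exact server-side semantics are an assumption; names are hypothetical):

```python
def passes_namespace(tags, allow, deny):
    """A neighbor passes if it carries an allowed tag (when an allow list is
    given) and carries no denied tag."""
    tags = set(tags)
    if deny and tags & set(deny):
        return False
    if allow and not (tags & set(allow)):
        return False
    return True

neighbors = [{"id": "a", "tags": ["red"]}, {"id": "b", "tags": ["blue"]}]
kept = [n["id"] for n in neighbors
        if passes_namespace(n["tags"], allow=["red"], deny=[])]
print(kept)   # ['a']
```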
Execution
Instance of a general execution.
name
string
Output only. The resource name of the Execution.
display_name
string
User provided display name of the Execution. May be up to 128 Unicode characters.
The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
etag
string
An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Executions.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
Output only. Timestamp when this Execution was created.
Output only. Timestamp when this Execution was last updated.
schema_title
string
The title of the schema describing the metadata.
The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as a unique identifier for schemas within the local metadata store.
schema_version
string
The version of the schema in schema_title
to use.
The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as a unique identifier for schemas within the local metadata store.
Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
description
string
Description of the Execution
State
Describes the state of the Execution.
| Enums | |
|---|---|
| STATE_UNSPECIFIED | Unspecified Execution state. |
| NEW | The Execution is new. |
| RUNNING | The Execution is running. |
| COMPLETE | The Execution has finished running. |
| FAILED | The Execution has failed. |
| CACHED | The Execution completed through Cache hit. |
| CANCELLED | The Execution was cancelled. |
ExplainRequest
Request message for PredictionService.Explain
.
endpoint
string
Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the explanation call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's
PredictSchemata's
instance_schema_uri
.
The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's
PredictSchemata's
parameters_schema_uri
.
If specified, overrides the explanation_spec
of the DeployedModel. Can be used for explaining prediction results with different configurations, such as: - Explaining top-5 predictions results as opposed to top-1; - Increasing path count or step count of the attribution methods to reduce approximate errors; - Using different baselines for explaining the prediction results.
deployed_model_id
string
If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split
.
ExplainResponse
Response message for PredictionService.Explain
.
The explanations of the Model's PredictResponse.predictions
.
It has the same number of elements as instances
to be explained.
deployed_model_id
string
ID of the Endpoint's DeployedModel that served this explanation.
The predictions that are the output of the predictions call. Same as PredictResponse.predictions
.
Explanation
Explanation of a prediction (provided in PredictResponse.predictions
) produced by the Model on a given instance
.
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index
can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4
for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4
, and the default Shapley values would be computed for rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k
, the attributions are sorted by [instance_output_value][Attributions.instance_output_value] in descending order. If ExplanationParameters.output_indices
is specified, the attributions are stored by Attribution.output_index
in the same order as they appear in the output_indices.
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
ExplanationMetadata
Metadata describing the Model's input and output for explanation.
Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature.
An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs
. The baseline of the empty feature is chosen by Vertex AI.
For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions
are keyed by this key (if not grouped with another feature).
For custom images, the key must match with the key in instance
.
Required. Map from output names to output metadata.
For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters.
For custom images, keys are the name of the output field in the prediction to be explained.
Currently only one key is allowed.
feature_attributions_schema_uri
string
Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions
. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
latent_space_source
string
Name of the source to generate embeddings for example based explanations.
InputMetadata
Metadata of the input of a feature.
Fields other than InputMetadata.input_baselines
are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
Baseline inputs for this feature.
If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions
.
For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor.
For custom images, the element of the baselines must be in the same format as the feature's input in the instance
[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's
PredictSchemata's
instance_schema_uri
.
input_tensor_name
string
Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
modality
string
Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
indices_tensor_name
string
Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
dense_shape_tensor_name
string
Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
index_feature_mapping[]
string
A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding
is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
encoded_tensor_name
string
Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution
or XRAI attribution
and the input tensor is not differentiable.
An encoded tensor is generated if the input tensor is encoded by a lookup table.
A list of baselines for the encoded tensor.
The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
Visualization configurations for image explanation.
group_name
string
Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions
, keyed by the group name.
Encoding
Defines how a feature is encoded. Defaults to IDENTITY.
| Enums | |
|---|---|
| ENCODING_UNSPECIFIED | Default value. This is the same as IDENTITY. |
| IDENTITY | The tensor represents one feature. |
| BAG_OF_FEATURES | The tensor represents a bag of features where each index maps to a feature. |
| BAG_OF_FEATURES_SPARSE | The tensor represents a bag of features where each index maps to a feature. Zero values in the tensor indicate that the feature is non-existent. |
| INDICATOR | The tensor is a list of binaries representing whether a feature exists or not (1 indicates existence). |
| COMBINED_EMBEDDING | The tensor is encoded into a 1-dimensional array represented by an encoded tensor. |
| CONCAT_EMBEDDING | Select this encoding when the input tensor is encoded into a 2-dimensional array represented by an encoded tensor. |
FeatureValueDomain
Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained.
min_value
float
The minimum permissible value for this feature.
max_value
float
The maximum permissible value for this feature.
original_mean
float
If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
original_stddev
float
If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
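Recovering the original feature value from a z-scored input follows directly from the two fields above; a minimal sketch:

```python
def denormalize(z, original_mean, original_stddev):
    """Recover the original feature value from a z-scored (mean 0, stddev 1) input."""
    return z * original_stddev + original_mean

def normalize(x, original_mean, original_stddev):
    """Inverse operation: z-score an original-domain value."""
    return (x - original_mean) / original_stddev

# An image-tensor domain with original_mean=128 and original_stddev=64:
print(denormalize(0.5, 128.0, 64.0))   # 160.0
print(normalize(160.0, 128.0, 64.0))   # 0.5
```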
Visualization
Visualization configurations for image explanation.
Type of the image visualization. Only applicable to Integrated Gradients attribution
. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
The color scheme used for the highlighted areas.
Defaults to PINK_GREEN for Integrated Gradients attribution
, which shows positive attributions in green and negative in pink.
Defaults to VIRIDIS for XRAI attribution
, which highlights the most influential regions in yellow and the least influential in blue.
clip_percent_upperbound
float
Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
clip_percent_lowerbound
float
Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
ColorMap
The color scheme used for highlighting areas.
| Enums | |
|---|---|
| COLOR_MAP_UNSPECIFIED | Should not be used. |
| PINK_GREEN | Positive: green. Negative: pink. |
| VIRIDIS | Viridis color map: a perceptually uniform color mapping which is easier to see by those with colorblindness and progresses from yellow to green to blue. Positive: yellow. Negative: blue. |
| RED | Positive: red. Negative: red. |
| GREEN | Positive: green. Negative: green. |
| RED_GREEN | Positive: green. Negative: red. |
| PINK_WHITE_GREEN | PiYG palette. |
OverlayType
How the original image is displayed in the visualization.
| Enums | |
|---|---|
| OVERLAY_TYPE_UNSPECIFIED | Default value. This is the same as NONE. |
| NONE | No overlay. |
| ORIGINAL | The attributions are shown on top of the original image. |
| GRAYSCALE | The attributions are shown on top of a grayscaled version of the original image. |
| MASK_BLACK | The attributions are used as a mask to reveal predictive parts of the image and hide the un-predictive parts. |
Polarity
Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
| Enums | |
|---|---|
| POLARITY_UNSPECIFIED | Default value. This is the same as POSITIVE. |
| POSITIVE | Highlights the pixels/outlines that were most influential to the model's prediction. |
| NEGATIVE | Setting polarity to negative highlights areas that do not lead to the model's current prediction. |
| BOTH | Shows both positive and negative attributions. |
Type
Type of the image visualization. Only applicable to Integrated Gradients attribution
.
| Enums | |
|---|---|
| TYPE_UNSPECIFIED | Should not be used. |
| PIXELS | Shows which pixel contributed to the image prediction. |
| OUTLINES | Shows which region contributed to the image prediction by outlining the region. |
OutputMetadata
Metadata of the prediction output to be explained.
output_tensor_name
string
Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
Union field display_name_mapping. Defines how to map Attribution.output_index to Attribution.output_display_name. If neither of the fields are specified, Attribution.output_display_name will not be populated. display_name_mapping can be only one of the following:
Static mapping between the index and display name.
Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sorts the outputs by their values.
The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name
is populated by locating in the mapping with Attribution.output_index
.
display_name_mapping_key
string
Specify a field name in the prediction to look for the display name.
Use this if the prediction contains the display names for the outputs.
The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index
for a specific output.
ExplanationMetadataOverride
The ExplanationMetadata
entries that can be overridden at online explanation
time.
Required. Overrides the input metadata
of the features. The key is the name of the feature to be overridden. The keys specified here must exist in the input metadata to be overridden. If a feature is not specified here, the corresponding feature's input metadata is not overridden.
InputMetadataOverride
The input metadata
entries to be overridden.
Baseline inputs for this feature.
This overrides the input_baseline
field of the ExplanationMetadata.InputMetadata
object of the corresponding feature's input metadata. If it's not specified, the original baselines are not overridden.
ExplanationParameters
Parameters to configure explaining for Model's predictions.
top_k
int32
If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
If populated, only returns attributions that have output_index
contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining.
If not populated, returns attributions for top_k
indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.
Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
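The precedence between top_k, output_indices, and the default argmax behavior described above can be sketched as (a hypothetical client-side helper, not part of any SDK; flattened 1-D outputs assumed for simplicity):

```python
def indices_to_explain(outputs, top_k=None, output_indices=None):
    """Which output indices receive attributions, per the precedence rules."""
    if output_indices is not None:          # explicit indices win
        return list(output_indices)
    order = sorted(range(len(outputs)), key=lambda i: outputs[i], reverse=True)
    if top_k is not None:
        if top_k == -1:                     # -1 means explain all outputs
            return order
        return order[:top_k]
    return [order[0]]                       # default: argmax index only

scores = [0.1, 0.7, 0.2]
print(indices_to_explain(scores))                       # [1]
print(indices_to_explain(scores, top_k=2))              # [1, 2]
print(indices_to_explain(scores, output_indices=[0]))   # [0]
```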
Union field method
.
method
can be only one of the following:
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825
XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
Example-based explanations that returns the nearest neighbors from the provided dataset.
ExplanationSpec
Specification of Model explanation.
Required. Parameters that configure explaining of the Model's predictions.
Optional. Metadata describing the Model's input and output for explanation.
ExplanationSpecOverride
The ExplanationSpec
entries that can be overridden at online explanation
time.
The parameters to be overridden. Note that the attribution method cannot be changed. If not specified, no parameter is overridden.
The metadata to be overridden. If not specified, no metadata is overridden.
The example-based explanations parameter overrides.
ExportDataConfig
Describes what part of the Dataset is to be exported, the destination of the export and how to export.
annotations_filter
string
An expression for filtering what part of the Dataset is to be exported. Only Annotations that match this filter will be exported. The filter syntax is the same as in ListAnnotations
.
saved_query_id
string
The ID of a SavedQuery (annotation set) under the Dataset specified by [dataset_id][] used for filtering Annotations for training.
Only used for custom training data export use cases. Only applicable to Datasets that have SavedQueries.
Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter
, the Annotations used for training are filtered by both saved_query_id
and annotations_filter
.
Only one of saved_query_id
and annotation_schema_uri
should be specified as both of them represent the same thing: problem type.
annotation_schema_uri
string
The Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/, note that the chosen schema must be consistent with metadata
of the Dataset specified by [dataset_id][].
Only used for custom training data export use cases. Only applicable to Datasets that have DataItems and Annotations.
Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on.
When used in conjunction with annotations_filter
, the Annotations used for training are filtered by both annotations_filter
and annotation_schema_uri
.
Indicates the usage of the exported files.
Union field destination. The destination of the output. destination can be only one of the following:
- The Google Cloud Storage location where the output is to be written to. In the given directory a new directory will be created with name: export-data-<dataset-display-name>-<timestamp-of-export-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory. Inside that directory, annotations with the same schema will be grouped into sub-directories named with the corresponding annotations' schema title. Inside these sub-directories, a schema.yaml will be created to describe the output format.
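The directory-naming convention above can be sketched as (a hypothetical helper for illustration only; the exact timestamp the service uses is the time of the export call):

```python
from datetime import datetime, timezone

def export_directory(gcs_prefix, dataset_display_name, when):
    """Build export-data-<dataset-display-name>-<timestamp> with an
    ISO-8601 YYYY-MM-DDThh:mm:ss.sssZ timestamp."""
    when = when.astimezone(timezone.utc)
    ts = when.strftime("%Y-%m-%dT%H:%M:%S.") + "%03dZ" % (when.microsecond // 1000)
    return "%s/export-data-%s-%s" % (gcs_prefix.rstrip("/"), dataset_display_name, ts)

when = datetime(2024, 5, 1, 12, 0, 0, 123000, tzinfo=timezone.utc)
print(export_directory("gs://my-bucket/exports", "flowers", when))
# gs://my-bucket/exports/export-data-flowers-2024-05-01T12:00:00.123Z
```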
Union field split. Instructions for how the export data should be split between the training, validation, and test sets. split can be only one of the following:
- Split based on fractions defining the size of each set.
- Split based on the provided filters for each set.
ExportUse
ExportUse indicates the usage of the exported files. It restricts the file destination, format, annotations to be exported, whether to allow unannotated data to be exported, and whether to clone files to a temp Cloud Storage bucket.
Enums | |
---|---|
EXPORT_USE_UNSPECIFIED |
Regular user export. |
CUSTOM_CODE_TRAINING |
Export for custom code training. |
ExportDataOperationMetadata
Runtime operation information for DatasetService.ExportData
.
The common part of the operation metadata.
gcs_output_directory
string
A Google Cloud Storage directory whose path ends with '/'. The exported data is stored in this directory.
ExportDataRequest
Request message for DatasetService.ExportData
.
name
string
Required. The name of the Dataset resource. Format: projects/{project}/locations/{location}/datasets/{dataset}
Required. The desired output location.
ExportDataResponse
Response message for DatasetService.ExportData
.
exported_files[]
string
All of the files that are exported in this export operation. For custom code training export, only three (training, validation and test) Cloud Storage paths in wildcard format are populated (for example, gs://.../training-*).
Only present for custom code training export use case. Records data stats, i.e., train/validation/test item/annotation counts calculated during the export operation.
ExportFeatureValuesOperationMetadata
Details of operations that exports Features values.
Operation metadata for Featurestore export Feature values.
ExportFeatureValuesRequest
Request message for FeaturestoreService.ExportFeatureValues
.
entity_type
string
Required. The resource name of the EntityType from which to export Feature values. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
Required. Specifies destination location and format.
Required. Selects Features to export values of.
Per-Feature export settings.
Union field mode
. Required. The mode in which Feature values are exported. mode
can be only one of the following:
Exports the latest Feature values of all entities of the EntityType within a time range.
Exports all historical values of all entities of the EntityType within a time range.
FullExport
Describes exporting all historical Feature values of all entities of the EntityType between [start_time, end_time].
Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.
Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision.
SnapshotExport
Describes exporting the latest Feature values of all entities of the EntityType between [start_time, snapshot_time].
Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision.
Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.
ExportFeatureValuesResponse
This type has no fields.
Response message for FeaturestoreService.ExportFeatureValues
.
ExportFilterSplit
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message should match nothing, it can be set as '-' (the minus sign).
Supported only for unstructured Datasets.
training_filter
string
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with the same syntax as the one used in DatasetService.ListDataItems
may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
validation_filter
string
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with the same syntax as the one used in DatasetService.ListDataItems
may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
test_filter
string
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with the same syntax as the one used in DatasetService.ListDataItems
may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
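The tie-breaking rule above (first matching set wins, in the order training, validation, test) can be sketched in Python. This is a hypothetical client-side illustration, not part of the API; the lambda predicates stand in for the server-side ListDataItems filter syntax.

```python
# Hypothetical sketch of the documented ExportFilterSplit tie-breaking rule:
# a DataItem that matches more than one filter is assigned to the first
# matching set in the order training, validation, test.
def assign_split(item, training_filter, validation_filter, test_filter):
    for set_name, matches in (("training", training_filter),
                              ("validation", validation_filter),
                              ("test", test_filter)):
        if matches(item):
            return set_name
    return None  # DataItems matched by no filter are ignored
```

For example, an item matched by both the training and test filters lands in the training set.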
ExportFractionSplit
Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction
, validation_fraction
and test_fraction
may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.
training_fraction
double
The fraction of the input data that is to be used to train the Model.
validation_fraction
double
The fraction of the input data that is to be used to validate the Model.
test_fraction
double
The fraction of the input data that is to be used to evaluate the Model.
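The fraction rules above can be sketched as a small helper. This is a hypothetical client-side check (the service performs its own validation), and it does not attempt to replicate how Vertex AI distributes any remainder below 1.

```python
# Hypothetical helper mirroring the documented ExportFractionSplit rules:
# unset fractions default to roughly 80/10/10, and provided fractions
# must sum to at most 1.
def resolve_fractions(training=None, validation=None, test=None):
    if training is None and validation is None and test is None:
        return 0.8, 0.1, 0.1  # documented default split
    fractions = tuple(f if f is not None else 0.0
                      for f in (training, validation, test))
    if sum(fractions) > 1.0:
        raise ValueError("fractions must sum to at most 1")
    return fractions
```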
ExportModelOperationMetadata
Details of ModelService.ExportModel
operation.
The common part of the operation metadata.
Output only. Information further describing the output of this Model export.
OutputInfo
Further describes the output of the ExportModel. Supplements ExportModelRequest.OutputConfig
.
artifact_output_uri
string
Output only. If the Model artifact is being exported to Google Cloud Storage this is the full path of the directory created, into which the Model files are being written to.
image_output_uri
string
Output only. If the Model image is being exported to Google Container Registry or Artifact Registry this is the full path of the image created.
ExportModelRequest
Request message for ModelService.ExportModel
.
name
string
Required. The resource name of the Model to export. The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version will be exported.
Required. The desired output location and configuration.
OutputConfig
Output configuration for the Model export.
export_format_id
string
The ID of the format in which the Model must be exported. Each Model lists the export formats it supports
. If no value is provided here, then the first from the list of the Model's supported formats is used by default.
The Cloud Storage location where the Model artifact is to be written to. Under the directory given as the destination, a new directory with the name "model-export-<model-display-name>-<timestamp-of-export-call>
", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format, will be created. Inside it, the Model and any of its supporting files will be written. This field should only be set when the exportableContent
field of the [Model.supported_export_formats] object contains ARTIFACT
.
The Google Container Registry or Artifact Registry uri where the Model container image will be copied to. This field should only be set when the exportableContent
field of the [Model.supported_export_formats] object contains IMAGE
.
ExportModelResponse
This type has no fields.
Response message of ModelService.ExportModel
operation.
ExportTensorboardTimeSeriesDataRequest
Request message for TensorboardService.ExportTensorboardTimeSeriesData
.
tensorboard_time_series
string
Required. The resource name of the TensorboardTimeSeries to export data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
filter
string
Exports the TensorboardTimeSeries' data that match the filter expression.
page_size
int32
The maximum number of data points to return per page. The default page_size is 1000. Values must be between 1 and 10000. Values above 10000 are coerced to 10000.
page_token
string
A page token, received from a previous ExportTensorboardTimeSeriesData
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to ExportTensorboardTimeSeriesData
must match the call that provided the page token.
order_by
string
Field to use to sort the TensorboardTimeSeries' data. By default, TensorboardTimeSeries' data is returned in a pseudo random order.
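The page_token contract above follows the standard page-token loop. The sketch below is a generic illustration, not a real client call: `export_page` is a hypothetical stand-in for the RPC, assumed to return a dict with the response fields `time_series_data_points` and `next_page_token`.

```python
# Generic page-token loop for ExportTensorboardTimeSeriesData-style
# pagination. All parameters other than page_token must stay constant
# across calls, per the documentation.
def export_all_points(export_page, page_size=1000):
    points, token = [], ""
    while True:
        resp = export_page(page_size=page_size, page_token=token)
        points.extend(resp.get("time_series_data_points", []))
        token = resp.get("next_page_token", "")
        if not token:  # an omitted/empty token means no subsequent pages
            return points
```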
ExportTensorboardTimeSeriesDataResponse
Response message for TensorboardService.ExportTensorboardTimeSeriesData
.
The returned time series data points.
next_page_token
string
A token, which can be sent as page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
Feature
Feature Metadata information. For example, color is a feature that describes an apple.
name
string
Immutable. Name of the Feature. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature}
projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature}
The last part, feature, is assigned by the client. The feature can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9, starting with a letter. The value will be unique given an entity type.
description
string
Description of the Feature.
Immutable. Only applicable for Vertex AI Feature Store (Legacy). Type of Feature value.
Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this EntityType was created.
Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this EntityType was most recently updated.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your Features.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Feature (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
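The label constraints above can be approximated with a client-side validator. This is a hypothetical helper, not part of the API, and the regex is an ASCII-only interpretation of "lowercase letters, numeric characters, underscores and dashes" (the service additionally allows international characters).

```python
import re

# Hypothetical validator for the documented label constraints:
# keys/values up to 64 characters, lowercase letters, digits,
# underscores and dashes; at most 64 user labels per resource.
_LABEL_RE = re.compile(r"^[a-z0-9_-]{1,64}$")

def check_labels(labels):
    if len(labels) > 64:
        raise ValueError("no more than 64 user labels are allowed")
    for key, value in labels.items():
        if not _LABEL_RE.match(key) or (value and not _LABEL_RE.match(value)):
            raise ValueError(f"invalid label: {key}={value}")
```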
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
disable_monitoring
bool
Optional. Only applicable for Vertex AI Feature Store (Legacy). If not set, use the monitoring_config defined for the EntityType this Feature belongs to. Only Features with type (Feature.ValueType
) BOOL, STRING, DOUBLE or INT64 can enable monitoring.
If set to true, all types of data monitoring are disabled despite the config on EntityType.
Output only. Only applicable for Vertex AI Feature Store (Legacy). The list of historical stats and anomalies with specified objectives.
version_column_name
string
Only applicable for Vertex AI Feature Store. The name of the BigQuery Table/View column hosting data for this version. If no value is provided, feature_id is used.
point_of_contact
string
Entity responsible for maintaining this feature. Can be a comma-separated list of email addresses or URIs.
MonitoringStatsAnomaly
A list of historical SnapshotAnalysis
or ImportFeaturesAnalysis
stats requested by user, sorted by FeatureStatsAnomaly.start_time
descending.
Output only. The objective for each stats.
Output only. The stats and anomalies generated at specific timestamp.
Objective
If the objective in the request is both Import Feature Analysis and Snapshot Analysis, this objective could be one of them. Otherwise, this objective should be the same as the objective in the request.
Enums | |
---|---|
OBJECTIVE_UNSPECIFIED |
If it's OBJECTIVE_UNSPECIFIED, monitoring_stats will be empty. |
IMPORT_FEATURE_ANALYSIS |
Stats are generated by Import Feature Analysis. |
SNAPSHOT_ANALYSIS |
Stats are generated by Snapshot Analysis. |
ValueType
Only applicable for Vertex AI Feature Store (Legacy). An enum representing the value type of a feature.
Enums | |
---|---|
VALUE_TYPE_UNSPECIFIED |
The value type is unspecified. |
BOOL |
Used for Feature that is a boolean. |
BOOL_ARRAY |
Used for Feature that is a list of boolean. |
DOUBLE |
Used for Feature that is double. |
DOUBLE_ARRAY |
Used for Feature that is a list of double. |
INT64 |
Used for Feature that is INT64. |
INT64_ARRAY |
Used for Feature that is a list of INT64. |
STRING |
Used for Feature that is string. |
STRING_ARRAY |
Used for Feature that is a list of String. |
BYTES |
Used for Feature that is bytes. |
STRUCT |
Used for Feature that is struct. |
FeatureGroup
Vertex AI Feature Group.
name
string
Identifier. Name of the FeatureGroup. Format: projects/{project}/locations/{location}/featureGroups/{featureGroup}
Output only. Timestamp when this FeatureGroup was created.
Output only. Timestamp when this FeatureGroup was last updated.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your FeatureGroup.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureGroup (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
description
string
Optional. Description of the FeatureGroup.
Union field source
.
source
can be only one of the following:
Indicates that features for this group come from BigQuery Table/View. By default treats the source as a sparse time series source. The BigQuery source table or view must have at least one entity ID column and a column named feature_timestamp
.
BigQuery
Input source type for BigQuery Tables and Views.
Required. Immutable. The BigQuery source URI that points to either a BigQuery Table or View.
static_data_source
bool
Optional. Set if the data source is not a time-series.
dense
bool
Optional. If set, all feature values will be fetched from a single row per unique entityId, including nulls. If not set, all rows for each unique entityId are collapsed into a single row with any non-null values if present; if no non-null values are present, null will be synced. For example, if the source has schema (entity_id, feature_timestamp, f0, f1)
and the following rows: (e1, 2020-01-01T10:00:00.123Z, 10, 15)
(e1, 2020-02-01T10:00:00.123Z, 20, null)
If dense is set, (e1, 20, null)
is synced to online stores. If dense is not set, (e1, 20, 15)
is synced to online stores.
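The dense vs. non-dense semantics can be sketched as follows. This is a hypothetical illustration of the documented behavior, not the service implementation; rows are modeled as (entity_id, feature_timestamp, f0, f1) tuples with None standing for NULL.

```python
# Sketch of the documented dense vs. non-dense sync semantics.
def collapse(rows, dense):
    by_entity = {}
    for row in sorted(rows, key=lambda r: r[1]):  # oldest first
        eid, _, *features = row
        if dense:
            by_entity[eid] = features  # latest row wins, nulls included
        else:
            current = by_entity.setdefault(eid, [None] * len(features))
            for i, value in enumerate(features):
                if value is not None:
                    current[i] = value  # keep latest non-null per column
    return by_entity

rows = [("e1", "2020-01-01T10:00:00.123Z", 10, 15),
        ("e1", "2020-02-01T10:00:00.123Z", 20, None)]
```

With dense set, `collapse(rows, dense=True)` yields `(e1, 20, None)`; without it, `collapse(rows, dense=False)` yields `(e1, 20, 15)`, matching the example above.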
FeatureNoiseSigma
Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.
Noise sigma per feature. No noise is added to features that are not set.
NoiseSigmaForFeature
Noise sigma for a single feature.
name
string
The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs
.
sigma
float
This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma
but represents the noise added to the current feature. Defaults to 0.1.
FeatureOnlineStore
Vertex AI Feature Online Store provides a centralized repository for serving ML features and embedding indexes at low latency. The Feature Online Store is a top-level container.
name
string
Identifier. Name of the FeatureOnlineStore. Format: projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}
Output only. Timestamp when this FeatureOnlineStore was created.
Output only. Timestamp when this FeatureOnlineStore was last updated.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your FeatureOnlineStore.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
Output only. State of the featureOnlineStore.
Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from the common Vertex service endpoint.
Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
Union field storage_type
.
storage_type
can be only one of the following:
Contains settings for the Cloud Bigtable instance that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore.
Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choosing the Optimized storage type, PrivateServiceConnectConfig.enable_private_service_connect
must be set to use a private endpoint. Otherwise, the public endpoint is used by default.
Bigtable
Required. Autoscaling config applied to Bigtable Instance.
AutoScaling
min_node_count
int32
Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.
max_node_count
int32
Required. The maximum number of nodes to scale up to. Must be greater than or equal to min_node_count, and less than or equal to 10 times min_node_count.
cpu_utilization_target
int32
Optional. A percentage of the cluster's CPU capacity. Can be from 10% to 80%. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set, defaults to 50%.
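The AutoScaling constraints above can be checked client-side before sending a request. This is a hypothetical helper, not part of the API; the service performs its own validation.

```python
# Hypothetical client-side check of the documented Bigtable AutoScaling
# constraints: min_node_count >= 1, max_node_count within
# [min_node_count, 10 * min_node_count], CPU target in [10, 80]
# (defaulting to 50 when unset).
def validate_autoscaling(min_node_count, max_node_count,
                         cpu_utilization_target=None):
    if min_node_count < 1:
        raise ValueError("min_node_count must be >= 1")
    if not (min_node_count <= max_node_count <= 10 * min_node_count):
        raise ValueError("max_node_count must be within "
                         "[min_node_count, 10 * min_node_count]")
    target = 50 if cpu_utilization_target is None else cpu_utilization_target
    if not (10 <= target <= 80):
        raise ValueError("cpu_utilization_target must be between 10 and 80")
    return {"min_node_count": min_node_count,
            "max_node_count": max_node_count,
            "cpu_utilization_target": target}
```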
DedicatedServingEndpoint
The dedicated serving endpoint for this FeatureOnlineStore. Only needs to be set when the Optimized storage type is chosen. A public endpoint is provisioned by default.
public_endpoint_domain_name
string
Output only. This field will be populated with the domain name to use for this FeatureOnlineStore
Optional. Private service connect config. The private service connection is available only for the Optimized storage type, not for embedding management now. If PrivateServiceConnectConfig.enable_private_service_connect
is set to true, customers will use a private service connection to send requests. Otherwise, the connection uses the public endpoint.
service_attachment
string
Output only. The name of the service attachment resource. Populated if private service connect is enabled and after FeatureViewSync is created.
Optimized
This type has no fields.
Optimized storage type
State
Possible states a featureOnlineStore can have.
Enums | |
---|---|
STATE_UNSPECIFIED |
Default value. This value is unused. |
STABLE |
State when the featureOnlineStore configuration is not being updated and the fields reflect the current configuration of the featureOnlineStore. The featureOnlineStore is usable in this state. |
UPDATING |
The state of the featureOnlineStore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featureOnlineStore. The featureOnlineStore is still usable in this state. |
FeatureSelector
Selector for Features of an EntityType.
Required. Matches Features based on ID.
FeatureStatsAnomaly
Stats and Anomaly generated at a specific timestamp for a specific Feature. The start_time and end_time are used to define the time range of the dataset that the current stats belong to, e.g. prediction traffic is bucketed into prediction datasets by time window. If the Dataset is not defined by a time window, start_time = end_time. The timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the TensorFlow-defined protos. The field data_stats contains almost identical information to the raw stats in the Vertex AI defined proto, for the UI to display.
score
double
Feature importance score, only populated when cross-feature monitoring is enabled. For now only used to represent feature attribution score within range [0, 1] for ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_SKEW
and ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_DRIFT
.
stats_uri
string
Path of the stats file for current feature values in Cloud Storage bucket. Format: gs://
anomaly_uri
string
Path of the anomaly file for current feature values in Cloud Storage bucket. Format: gs://
distribution_deviation
double
Deviation from the current stats to baseline stats. 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by the Jensen–Shannon divergence.
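The two distances named above can be computed as follows. This is a plain-Python sketch for intuition (not the monitoring service's implementation); both functions assume the inputs are probability distributions over the same bins.

```python
import math

# L-infinity norm between two categorical distributions, and
# Jensen-Shannon divergence (base 2) between two numerical
# distributions, as named in the distribution_deviation field.
def l_infinity(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

def js_divergence(p, q):
    def kl(a, b):  # Kullback-Leibler divergence, skipping zero-mass bins
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical distributions give 0 under both measures; fully disjoint distributions give a JS divergence of 1 in base 2.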
anomaly_detection_threshold
double
This is the threshold used when detecting anomalies. The threshold can be changed by user, so this one might be different from ThresholdConfig.value
.
The start timestamp of window where stats were generated. For objectives where time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), start_time is only used to indicate the monitoring intervals, so it always equals to (end_time - monitoring_interval).
The end timestamp of window where stats were generated. For objectives where time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), end_time indicates the timestamp of the data used to generate stats (e.g. timestamp we take snapshots for feature values).
FeatureValue
Value for a feature.
Metadata of feature value.
Union field value
. Value for the feature. value
can be only one of the following:
bool_value
bool
Bool type feature value.
double_value
double
Double type feature value.
int64_value
int64
Int64 feature value.
string_value
string
String feature value.
A list of bool type feature value.
A list of double type feature value.
A list of int64 type feature value.
A list of string type feature value.
bytes_value
bytes
Bytes feature value.
A struct type feature value.
Metadata
Metadata of feature value.
Feature generation timestamp. Typically, it is provided by user at feature ingestion time. If not, feature store will use the system timestamp when the data is ingested into feature store. For streaming ingestion, the time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.
FeatureValueDestination
A destination location for Feature values and format.
Union field destination
.
destination
can be only one of the following:
Output in BigQuery format. BigQueryDestination.output_uri
in FeatureValueDestination.bigquery_destination
must refer to a table.
Output in TFRecord format.
Below is the mapping from Feature value type in Featurestore to Feature value type in TFRecord:
Value type in Featurestore | Value type in TFRecord
DOUBLE, DOUBLE_ARRAY | FLOAT_LIST
INT64, INT64_ARRAY | INT64_LIST
STRING, STRING_ARRAY, BYTES | BYTES_LIST
BOOL, BOOL_ARRAY (true, false) | BYTES_LIST (true -> byte_string("true"), false -> byte_string("false"))
Output in CSV format. Array Feature value types are not allowed in CSV format.
FeatureValueList
Container for list of values.
A list of feature values. All of them should be the same data type.
FeatureView
FeatureView is a representation of the values that the FeatureOnlineStore will serve based on its syncConfig.
name
string
Identifier. Name of the FeatureView. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}
Output only. Timestamp when this FeatureView was created.
Output only. Timestamp when this FeatureView was last updated.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your FeatureViews.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureView (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
Optional. Configuration for index preparation for vector search. It contains the required configurations to create an index from source data, so that approximate nearest neighbor (a.k.a ANN) algorithms search can be performed during online serving.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
Union field source
.
source
can be only one of the following:
Optional. Configures how data is supposed to be extracted from a BigQuery source to be loaded onto the FeatureOnlineStore.
Optional. Configures the features from a Feature Registry source that need to be loaded onto the FeatureOnlineStore.
Optional. The Vertex RAG Source that the FeatureView is linked to.
BigQuerySource
uri
string
Required. The BigQuery view URI that will be materialized on each sync trigger based on FeatureView.SyncConfig.
entity_id_columns[]
string
Required. Columns to construct entity_id / row keys.
FeatureRegistrySource
A Feature Registry source for features that need to be synced to Online Store.
Required. List of features that need to be synced to Online Store.
project_number
int64
Optional. The project number of the parent project of the Feature Groups.
FeatureGroup
Features belonging to a single feature group that will be synced to Online Store.
feature_group_id
string
Required. Identifier of the feature group.
feature_ids[]
string
Required. Identifiers of features under the feature group.
IndexConfig
Configuration for vector indexing.
embedding_column
string
Optional. Column of embedding. This column contains the source data to create index for vector search. embedding_column must be set when using vector search.
filter_columns[]
string
Optional. Columns of features that are used to filter vector search results.
crowding_column
string
Optional. Column of crowding. This column contains crowding attribute which is a constraint on a neighbor list produced by FeatureOnlineStoreService.SearchNearestEntities
to diversify search results. If NearestNeighborQuery.per_crowding_attribute_neighbor_count
is set to K in SearchNearestEntitiesRequest
, it's guaranteed that no more than K entities of the same crowding attribute are returned in the response.
Optional. The distance measure used in nearest neighbor search.
Union field algorithm_config
. The configuration of the algorithms used for efficient search. algorithm_config
can be only one of the following:
Optional. Configuration options for the tree-AH algorithm (Shallow tree + Asymmetric Hashing). Please refer to this paper for more details: https://arxiv.org/abs/1908.10396
Optional. Configuration options for using brute force search, which simply implements the standard linear search in the database for each query. It is primarily meant for benchmarking and to generate the ground truth for approximate search.
embedding_dimension
int32
Optional. The number of dimensions of the input embedding.
BruteForceConfig
This type has no fields.
Configuration options for using brute force search.
DistanceMeasureType
The distance measure used in nearest neighbor search.
Enums | |
---|---|
DISTANCE_MEASURE_TYPE_UNSPECIFIED |
Should not be set. |
SQUARED_L2_DISTANCE |
Euclidean (L_2) Distance. |
COSINE_DISTANCE |
Cosine Distance. Defined as 1 - cosine similarity. We strongly suggest using DOT_PRODUCT_DISTANCE + UNIT_L2_NORM instead of COSINE distance. Our algorithms have been more optimized for DOT_PRODUCT distance which, when combined with UNIT_L2_NORM, is mathematically equivalent to COSINE distance and results in the same ranking. |
DOT_PRODUCT_DISTANCE |
Dot Product Distance. Defined as a negative of the dot product. |
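The recommendation above (DOT_PRODUCT_DISTANCE on unit-L2-normalized vectors instead of COSINE_DISTANCE) rests on a simple identity: on normalized vectors the two distances differ only by a constant shift, so they rank neighbors identically. The sketch below demonstrates this; it is an illustration, not service code.

```python
import math

# On unit-L2-normalized vectors, dot-product distance (negative dot
# product) and cosine distance (1 - cosine similarity) differ by a
# constant shift of 1, so nearest-neighbor rankings coincide.
def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot_product_distance(a, b):
    return -sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    return 1 + dot_product_distance(normalize(a), normalize(b))

query = normalize([1.0, 2.0])
candidates = [normalize(v) for v in ([2.0, 1.0], [1.0, 3.0], [-1.0, 0.5])]
rank_dot = sorted(range(3), key=lambda i: dot_product_distance(query, candidates[i]))
rank_cos = sorted(range(3), key=lambda i: cosine_distance(query, candidates[i]))
assert rank_dot == rank_cos  # identical ranking on normalized vectors
```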
TreeAHConfig
Configuration options for the tree-AH algorithm.
leaf_node_embedding_count
int64
Optional. Number of embeddings on each leaf node. The default value is 1000 if not set.
SyncConfig
Configuration for Sync. Only one option is set.
cron
string
Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".
continuous
bool
Optional. If true, syncs the FeatureView in a continuous manner to Online Store.
VertexRagSource
A Vertex Rag source for features that need to be synced to Online Store.
uri
string
Required. The BigQuery view/table URI that will be materialized on each manual sync trigger. The table/view is expected to have at least the following columns and types:
- corpus_id (STRING, NULLABLE/REQUIRED)
- file_id (STRING, NULLABLE/REQUIRED)
- chunk_id (STRING, NULLABLE/REQUIRED)
- chunk_data_type (STRING, NULLABLE/REQUIRED)
- chunk_data (STRING, NULLABLE/REQUIRED)
- embeddings (FLOAT, REPEATED)
- file_original_uri (STRING, NULLABLE/REQUIRED)
rag_corpus_id
int64
Optional. The RAG corpus id corresponding to this FeatureView.
FeatureViewDataFormat
Format of the data in the Feature View.
Enums | |
---|---|
FEATURE_VIEW_DATA_FORMAT_UNSPECIFIED |
Not set. Will be treated as the KeyValue format. |
KEY_VALUE |
Return response data in key-value format. |
PROTO_STRUCT |
Return response data in proto Struct format. |
FeatureViewDataKey
Lookup key for a feature view.
Union field key_oneof
.
key_oneof
can be only one of the following:
key
string
String key to use for lookup.
The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec.
CompositeKey
ID composed of several parts (columns).
parts[]
string
Parts to construct the Entity ID. Should match the ID columns defined in the FeatureView, in the same order.
FeatureViewSync
FeatureViewSync is a representation of a sync operation that copies data from the data source to the Feature View in the Online Store.
name
string
Identifier. Name of the FeatureViewSync. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}
Output only. Time when this FeatureViewSync is created. Creation of a FeatureViewSync means that the job is pending / waiting for sufficient resources but may not have started the actual data transfer yet.
Output only. Time when this FeatureViewSync is finished.
Output only. Final status of the FeatureViewSync.
Output only. Summary of the sync job.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
SyncSummary
Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync.
row_synced
int64
Output only. Total number of rows synced.
total_slot
int64
Output only. BigQuery slot milliseconds consumed for the sync job.
Lower bound of the system time watermark for the sync job. This is only set for continuously syncing feature views.
Featurestore
Vertex AI Feature Store provides a centralized repository for organizing, storing, and serving ML features. The Featurestore is a top-level container for your features and their values.
name
string
Output only. Name of the Featurestore. Format: projects/{project}/locations/{location}/featurestores/{featurestore}
Output only. Timestamp when this Featurestore was created.
Output only. Timestamp when this Featurestore was last updated.
etag
string
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize your Featurestore.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Featurestore (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
Optional. Config for online storage resources. This field should not co-exist with OnlineStoreReplicationConfig
. If both this field and OnlineStoreReplicationConfig are unset, the feature store will not have an online store and cannot be used for online serving.
Output only. State of the featurestore.
online_storage_ttl_days
int32
Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than online_storage_ttl_days
since the feature generation time. Note that online_storage_ttl_days
should be less than or equal to offline_storage_ttl_days
for each EntityType under a featurestore. If not set, defaults to 4000 days.
Optional. Customer-managed encryption key spec for data storage. If set, both of the online and offline data storage will be secured by this key.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
OnlineServingConfig
OnlineServingConfig specifies the details for provisioning online serving resources.
fixed_node_count
int32
The number of nodes for the online store. The number of nodes doesn't scale automatically, but you can manually update the number of nodes. If set to 0, the featurestore will not have an online store and cannot be used for online serving.
Online serving scaling configuration. Only one of fixed_node_count
and scaling
can be set. Setting one will reset the other.
Scaling
Online serving scaling configuration. If min_node_count and max_node_count are set to the same value, the cluster will be configured with that fixed number of nodes (no auto-scaling).
min_node_count
int32
Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.
max_node_count
int32
The maximum number of nodes to scale up to. Must be greater than min_node_count, and less than or equal to 10 times min_node_count.
cpu_utilization_target
int32
Optional. The cpu utilization that the Autoscaler should be trying to achieve. This number is on a scale from 0 (no utilization) to 100 (total utilization), and is limited between 10 and 80. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set or set to 0, default to 50.
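The constraints above can be sketched as a small validation helper (hypothetical, not part of any SDK; per the Scaling description, equal min and max values are treated as a fixed-size cluster):

```python
# Sketch of the Scaling constraints stated above (hypothetical helper,
# not Vertex AI SDK code).

def validate_scaling(min_node_count, max_node_count, cpu_utilization_target=0):
    # min_node_count must be at least 1.
    if min_node_count < 1:
        raise ValueError("min_node_count must be >= 1")
    # Equal min and max means a fixed-size cluster; max may not exceed
    # 10x min per the field description above.
    if not (min_node_count <= max_node_count <= 10 * min_node_count):
        raise ValueError("max_node_count must be in [min_node_count, 10 * min_node_count]")
    # Unset / 0 falls back to the documented default target of 50.
    target = cpu_utilization_target or 50
    if not (10 <= target <= 80):
        raise ValueError("cpu_utilization_target must be between 10 and 80")
    return target
```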
State
Possible states a featurestore can have.
| Enum | Description |
|---|---|
| STATE_UNSPECIFIED | Default value. This value is unused. |
| STABLE | State when the featurestore configuration is not being updated and the fields reflect the current configuration of the featurestore. The featurestore is usable in this state. |
| UPDATING | The state of the featurestore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featurestore. For example, online_serving_config.fixed_node_count can take minutes to update. While the update is in progress, the featurestore is in the UPDATING state, and the value of fixed_node_count can be the original value or the updated value, depending on the progress of the operation. Until the update completes, the actual number of nodes can still be the original value of fixed_node_count. The featurestore is still usable in this state. |
FeaturestoreMonitoringConfig
Configuration of how features in Featurestore are monitored.
The config for Snapshot Analysis Based Feature Monitoring.
The config for ImportFeatures Analysis Based Feature Monitoring.
Threshold for numerical features of anomaly detection. This is shared by all objectives of Featurestore Monitoring for numerical features (i.e. Features with type (Feature.ValueType
) DOUBLE or INT64).
Threshold for categorical features of anomaly detection. This is shared by all types of Featurestore Monitoring for categorical features (i.e. Features with type (Feature.ValueType
) BOOL or STRING).
ImportFeaturesAnalysis
Configuration of the Featurestore's ImportFeature Analysis Based Monitoring. This type of analysis generates statistics for values of each Feature imported by every ImportFeatureValues
operation.
Whether to enable, disable, or inherit the default behavior for import features analysis.
The baseline used to do anomaly detection for the statistics generated by import features analysis.
Baseline
Defines the baseline to do anomaly detection for feature values imported by each ImportFeatureValues
operation.
| Enum | Description |
|---|---|
| BASELINE_UNSPECIFIED | Should not be used. |
| LATEST_STATS | Choose the later of the statistics generated by either the most recent snapshot analysis or the previous import features analysis. If neither exists, skip anomaly detection and only generate statistics. |
| MOST_RECENT_SNAPSHOT_STATS | Use the statistics generated by the most recent snapshot analysis, if it exists. |
| PREVIOUS_IMPORT_FEATURES_STATS | Use the statistics generated by the previous import features analysis, if it exists. |
State
The state defines whether to enable ImportFeature analysis.
| Enum | Description |
|---|---|
| STATE_UNSPECIFIED | Should not be used. |
| DEFAULT | The default behavior of whether to enable the monitoring. EntityType-level config: disabled. Feature-level config: inherited from the configuration of the EntityType this Feature belongs to. |
| ENABLED | Explicitly enables import features analysis. EntityType-level config: by default enables import features analysis for all Features under it. Feature-level config: enables import features analysis regardless of the EntityType-level config. |
| DISABLED | Explicitly disables import features analysis. EntityType-level config: by default disables import features analysis for all Features under it. Feature-level config: disables import features analysis regardless of the EntityType-level config. |
SnapshotAnalysis
Configuration of the Featurestore's Snapshot Analysis Based Monitoring. This type of analysis generates statistics for each Feature based on a snapshot of the latest feature value of each entity at every monitoring_interval.
disabled
bool
Explicitly disables the snapshot analysis based monitoring. EntityType-level config: unset or disabled = true means snapshot analysis is disabled by default for Features under it; otherwise snapshot analysis monitoring is enabled by default with monitoring_interval for Features under it. Feature-level config: disabled = true means disabled regardless of the EntityType-level config; an unset monitoring_interval means the EntityType-level config applies; otherwise snapshot analysis monitoring runs with monitoring_interval regardless of the EntityType-level config.
monitoring_interval_days
int32
Configuration of the snapshot analysis based monitoring pipeline running interval. The value indicates number of days.
staleness_days
int32
Customized export features time window for snapshot analysis. Unit is one day. Default value is 3 weeks. Minimum value is 1 day. Maximum value is 4000 days.
ThresholdConfig
The config for Featurestore Monitoring threshold.
Union field threshold
.
threshold
can be only one of the following:
value
double
Specify a threshold value that can trigger the alert. 1. For categorical features, the distribution distance is calculated by the L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it is to be monitored; otherwise no alert will be triggered for that feature.
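The two distance measures named above can be illustrated in plain Python (this is not Vertex AI code; the distributions and the 0.1 threshold are made-up examples):

```python
import math

# Illustration of the two distance measures named above (not Vertex AI
# code): Jensen–Shannon divergence for numerical features and the
# L-infinity norm for categorical features, each compared to a threshold.

def js_divergence(p, q):
    # JSD(P||Q) = 0.5*KL(P||M) + 0.5*KL(Q||M), with M the mixture of P and Q.
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log(a / b) for a, b in zip(x, y) if a > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def l_infinity(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

baseline = [0.7, 0.2, 0.1]   # made-up baseline distribution
current = [0.5, 0.3, 0.2]    # made-up serving distribution
threshold = 0.1
numerical_alert = js_divergence(baseline, current) > threshold
categorical_alert = l_infinity(baseline, current) > threshold
```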
FetchFeatureValuesRequest
Request message for FeatureOnlineStoreService.FetchFeatureValues
. All the features under the requested feature view will be returned.
feature_view
string
Required. FeatureView resource format projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}
Optional. The request key to fetch feature values for.
Optional. Response data format. If not set, FeatureViewDataFormat.KEY_VALUE
will be used.
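Putting the fields together, a minimal request body might be sketched as a JSON-style dict (REST field casing assumed; the resource names are placeholders and the entity ID is a made-up example):

```python
# A FetchFeatureValues request body sketched as a JSON-style dict
# (REST field casing assumed; resource names are placeholders, the
# entity ID is a made-up example).

fetch_request = {
    "featureView": "projects/{project}/locations/{location}/"
                   "featureOnlineStores/{featureOnlineStore}/"
                   "featureViews/{featureView}",
    "dataKey": {"key": "user_123"},   # lookup key for one entity
    "dataFormat": "KEY_VALUE",        # the default when unset
}
```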
FetchFeatureValuesResponse
Response message for FeatureOnlineStoreService.FetchFeatureValues
The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs.
Union field format
.
format
can be only one of the following:
Feature values in KeyValue format.
Feature values in proto Struct format.
FeatureNameValuePairList
Response structure in the format of key (feature name) and (feature) value pair.
List of feature names and values.
FeatureNameValuePair
Feature name & value pair.
name
string
Feature short name.
Union field data
.
data
can be only one of the following:
Feature value.
FileData
URI based data.
mime_type
string
Required. The IANA standard MIME type of the source data.
file_uri
string
Required. URI.
FilterSplit
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. To make any of the filters in this message match nothing, set it to '-' (the minus sign).
Supported only for unstructured Datasets.
training_filter
string
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems
may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
validation_filter
string
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems
may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
test_filter
string
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems
may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
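The first-match precedence described above can be sketched as follows (hypothetical helper; real filter matching is done server-side with ListDataItems filter syntax, so a plain predicate stands in for it):

```python
# Sketch of the FilterSplit assignment rule above: a DataItem matched
# by more than one filter goes to the first matching set in
# training, validation, test order; unmatched items are ignored.
# A plain predicate stands in for server-side filter matching.

def assign_split(item, training_filter, validation_filter, test_filter):
    for split, matches in (("training", training_filter),
                           ("validation", validation_filter),
                           ("test", test_filter)):
        if matches(item):
            return split
    return None  # unmatched items are ignored

split = assign_split(
    {"split_label": "train"},            # made-up DataItem
    training_filter=lambda d: d["split_label"] == "train",
    validation_filter=lambda d: True,    # would also match
    test_filter=lambda d: False,
)
# "training" wins because it is checked first
```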
FluencyInput
Input for fluency metric.
Required. Spec for fluency score metric.
Required. Fluency instance.
FluencyInstance
Spec for fluency instance.
prediction
string
Required. Output of the evaluated model.
FluencyResult
Spec for fluency result.
explanation
string
Output only. Explanation for fluency score.
score
float
Output only. Fluency score.
confidence
float
Output only. Confidence for fluency score.
FluencySpec
Spec for fluency score metric.
version
int32
Optional. Which version to use for evaluation.
FractionSplit
Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.
training_fraction
double
The fraction of the input data that is to be used to train the Model.
validation_fraction
double
The fraction of the input data that is to be used to validate the Model.
test_fraction
double
The fraction of the input data that is to be used to evaluate the Model.
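These rules can be sketched as a small helper (hypothetical, not part of any SDK):

```python
# Sketch of the FractionSplit semantics above (hypothetical helper):
# fractions must sum to at most 1; if none are set, roughly 80/10/10
# is used by default.

def resolve_fractions(training=None, validation=None, test=None):
    # If none of the fractions are set, use the documented defaults.
    if training is None and validation is None and test is None:
        return 0.8, 0.1, 0.1
    t = training or 0.0
    v = validation or 0.0
    s = test or 0.0
    if t + v + s > 1.0 + 1e-9:
        raise ValueError("fractions must sum to at most 1")
    return t, v, s
```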
FulfillmentInput
Input for fulfillment metric.
Required. Spec for fulfillment score metric.
Required. Fulfillment instance.
FulfillmentInstance
Spec for fulfillment instance.
prediction
string
Required. Output of the evaluated model.
instruction
string
Required. Inference instruction prompt to compare prediction with.
FulfillmentResult
Spec for fulfillment result.
explanation
string
Output only. Explanation for fulfillment score.
score
float
Output only. Fulfillment score.
confidence
float
Output only. Confidence for fulfillment score.
FulfillmentSpec
Spec for fulfillment metric.
version
int32
Optional. Which version to use for evaluation.
FunctionCall
A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.
name
string
Required. The name of the function to call. Matches [FunctionDeclaration.name].
Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
FunctionCallingConfig
Function calling config.
Optional. Function calling mode.
allowed_function_names[]
string
Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
Mode
Function calling mode.
| Enum | Description |
|---|---|
| MODE_UNSPECIFIED | Unspecified function calling mode. This value should not be used. |
| AUTO | Default model behavior; the model decides to predict either function calls or a natural language response. |
| ANY | Model is constrained to always predict function calls only. If "allowed_function_names" is set, the predicted function calls will be limited to any one of "allowed_function_names"; otherwise the predicted function calls will be any one of the provided "function_declarations". |
| NONE | Model will not predict any function calls. Model behavior is the same as when not passing any function declarations. |
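A ToolConfig fragment using this mode might look like the following JSON-style dict (REST field casing assumed; the function name is a made-up example):

```python
# A function calling config fragment sketched as a JSON-style dict
# (REST field casing assumed; "get_weather" is a made-up function name).

tool_config = {
    "functionCallingConfig": {
        "mode": "ANY",  # model must predict a function call
        # Only meaningful when mode is ANY: restrict predictions
        # to this set of declared function names.
        "allowedFunctionNames": ["get_weather"],
    }
}
```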
FunctionDeclaration
Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool
by the model and executed by the client.
name
string
Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
description
string
Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the OpenAPI 3.0.3 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
Optional. Describes the output from this function in JSON Schema format. Reflects the OpenAPI 3.0.3 Response Object. The Schema defines the type used for the response value of the function.
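The inline example above ("1 required and 1 optional parameter") written out as a JSON-style schema object (a sketch of the shape, not SDK code):

```python
# The "1 required and 1 optional parameter" example above, written out
# as a JSON-style schema object (a sketch of the shape, not SDK code).

parameters = {
    "type": "OBJECT",
    "properties": {
        "param1": {"type": "STRING"},    # required
        "param2": {"type": "INTEGER"},   # optional
    },
    "required": ["param1"],
}
```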
FunctionResponse
The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction.
name
string
Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
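Following the output/error convention above, two FunctionResponse payloads might look like this (JSON-style dicts; the function name and values are made-up examples):

```python
# FunctionResponse payloads following the convention above: an "output"
# key for results, an "error" key for failures. The function name and
# values are made-up examples.

ok = {
    "name": "get_weather",
    "response": {"output": {"temperature_c": 21}},
}
failed = {
    "name": "get_weather",
    "response": {"error": {"message": "city not found"}},
}
```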
GcsDestination
The Google Cloud Storage location where the output is to be written to.
output_uri_prefix
string
Required. Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
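The trailing-slash behavior can be sketched as a tiny helper (hypothetical, not part of any SDK):

```python
# Sketch of the normalization described above (hypothetical helper):
# a '/' is appended to output_uri_prefix when missing.

def normalize_output_uri_prefix(uri: str) -> str:
    return uri if uri.endswith("/") else uri + "/"
```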
GcsSource
The Google Cloud Storage location for the input content.
uris[]
string
Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
GenerateContentRequest
Request message for [PredictionService.GenerateContent].
model
string
Required. The fully qualified name of the publisher model or tuned model endpoint to use.
Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*
Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}
Required. The content of the current conversation with the model.
For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
Optional. A list of Tools
the model may use to generate the next response.
A Tool
is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.
Optional. Tool config. This config is shared for all tools provided in the request.
labels
map<string, string>
Optional. The labels with user-defined metadata for the request. It is used for billing and reporting only.
Label keys and values can be no longer than 63 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter.
Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.
Optional. Generation config.
Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
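Putting the fields above together, a minimal request body might look like the following JSON-style dict (REST field casing assumed; the model path uses placeholders and the prompt is a made-up example):

```python
# A minimal GenerateContent request body sketched as a JSON-style dict
# (REST field casing assumed; the model path uses placeholders, the
# prompt is a made-up example).

request = {
    "model": "projects/{project}/locations/{location}/publishers/google/models/{model}",
    "contents": [
        # Single-turn query: one user entry; multi-turn would repeat
        # this field with the conversation history.
        {"role": "user", "parts": [{"text": "Say hello."}]}
    ],
    "generationConfig": {"temperature": 0.2, "maxOutputTokens": 64},
}
```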
GenerateContentResponse
Response message for [PredictionService.GenerateContent].
Output only. Generated candidates.
model_version
string
Output only. The model version used to generate the response.
Output only. Content filter results for a prompt sent in the request. Note: Sent only in the first stream chunk. Only happens when no candidates were generated due to content violations.
Usage metadata about the response(s).
PromptFeedback
Content filter results for a prompt sent in the request.
Output only. Blocked reason.
Output only. Safety ratings.
block_reason_message
string
Output only. A readable block reason message.
BlockedReason
Blocked reason enumeration.
| Enum | Description |
|---|---|
| BLOCKED_REASON_UNSPECIFIED | Unspecified blocked reason. |
| SAFETY | Candidates blocked due to safety. |
| OTHER | Candidates blocked due to other reasons. |
| BLOCKLIST | Candidates blocked due to terms included in the terminology blocklist. |
| PROHIBITED_CONTENT | Candidates blocked due to prohibited content. |
UsageMetadata
Usage metadata about response(s).
prompt_token_count
int32
Number of tokens in the request. When cached_content
is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.
candidates_token_count
int32
Number of tokens in the response(s).
total_token_count
int32
Total token count for prompt and response candidates.
GenerationConfig
Generation config.
stop_sequences[]
string
Optional. Stop sequences.
response_mime_type
string
Optional. Output response mimetype of the generated candidate text. Supported mimetype: - text/plain
: (default) Text output. - application/json
: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
temperature
float
Optional. Controls the randomness of predictions.
top_p
float
Optional. If specified, nucleus sampling will be used.
top_k
float
Optional. If specified, top-k sampling will be used.
candidate_count
int32
Optional. Number of candidates to generate.
max_output_tokens
int32
Optional. The maximum number of output tokens to generate per message.
response_logprobs
bool
Optional. If true, export the logprobs results in response.
logprobs
int32
Optional. Logit probabilities.
presence_penalty
float
Optional. Positive penalties.
frequency_penalty
float
Optional. Frequency penalties.
seed
int32
Optional. Seed.
Optional. The Schema
object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object. If set, a compatible response_mime_type must also be set. Compatible mimetypes: application/json
: Schema for JSON response.
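A GenerationConfig fragment pairing response_schema with a compatible response_mime_type, per the note above (shape sketched as a JSON-style dict; the schema is a made-up example):

```python
# GenerationConfig fragment pairing response_schema with a compatible
# response_mime_type, per the note above (JSON-style sketch; the
# schema itself is a made-up example).

generation_config = {
    "responseMimeType": "application/json",
    "responseSchema": {
        "type": "OBJECT",
        "properties": {"answer": {"type": "STRING"}},
        "required": ["answer"],
    },
}
```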
GenericOperationMetadata
Generic Metadata shared by all operations.
Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard Google Cloud error details.
Output only. Time when the operation was created.
Output only. Time when the operation was updated for the last time. If the operation has finished (successfully or not), this is the finish time.
GenieSource
Contains information about the source of the models generated from Generative AI Studio.
base_model_uri
string
Required. The public base model URI.
GetAnnotationSpecRequest
Request message for DatasetService.GetAnnotationSpec
.
name
string
Required. The name of the AnnotationSpec resource. Format: projects/{project}/locations/{location}/datasets/{dataset}/annotationSpecs/{annotation_spec}
Mask specifying which fields to read.
GetArtifactRequest
Request message for MetadataService.GetArtifact
.
name
string
Required. The resource name of the Artifact to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}
GetBatchPredictionJobRequest
Request message for JobService.GetBatchPredictionJob
.
name
string
Required. The name of the BatchPredictionJob resource. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}
GetContextRequest
Request message for MetadataService.GetContext
.
name
string
Required. The resource name of the Context to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
GetCustomJobRequest
Request message for JobService.GetCustomJob
.
name
string
Required. The name of the CustomJob resource. Format: projects/{project}/locations/{location}/customJobs/{custom_job}
GetDatasetRequest
Request message for DatasetService.GetDataset
.
name
string
Required. The name of the Dataset resource.
Mask specifying which fields to read.
GetDatasetVersionRequest
Request message for DatasetService.GetDatasetVersion
.
name
string
Required. The resource name of the Dataset version to retrieve. Format: projects/{project}/locations/{location}/datasets/{dataset}/datasetVersions/{dataset_version}
Mask specifying which fields to read.
GetDeploymentResourcePoolRequest
Request message for GetDeploymentResourcePool method.
name
string
Required. The name of the DeploymentResourcePool to retrieve. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
GetEndpointRequest
Request message for EndpointService.GetEndpoint
name
string
Required. The name of the Endpoint resource. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
GetEntityTypeRequest
Request message for FeaturestoreService.GetEntityType
.
name
string
Required. The name of the EntityType resource. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
GetExecutionRequest
Request message for MetadataService.GetExecution
.
name
string
Required. The resource name of the Execution to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
GetFeatureGroupRequest
Request message for FeatureRegistryService.GetFeatureGroup
.
name
string
Required. The name of the FeatureGroup resource.
GetFeatureOnlineStoreRequest
Request message for FeatureOnlineStoreAdminService.GetFeatureOnlineStore
.
name
string
Required. The name of the FeatureOnlineStore resource.
GetFeatureRequest
Request message for FeaturestoreService.GetFeature
. Request message for FeatureRegistryService.GetFeature
.
name
string
Required. The name of the Feature resource. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}
GetFeatureViewRequest
Request message for FeatureOnlineStoreAdminService.GetFeatureView
.
name
string
Required. The name of the FeatureView resource. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}
GetFeatureViewSyncRequest
Request message for FeatureOnlineStoreAdminService.GetFeatureViewSync
.
name
string
Required. The name of the FeatureViewSync resource. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}
GetFeaturestoreRequest
Request message for FeaturestoreService.GetFeaturestore
.
name
string
Required. The name of the Featurestore resource.
GetHyperparameterTuningJobRequest
Request message for JobService.GetHyperparameterTuningJob
.
name
string
Required. The name of the HyperparameterTuningJob resource. Format: projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}
GetIndexEndpointRequest
Request message for IndexEndpointService.GetIndexEndpoint
name
string
Required. The name of the IndexEndpoint resource. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}
GetIndexRequest
Request message for IndexService.GetIndex
name
string
Required. The name of the Index resource. Format: projects/{project}/locations/{location}/indexes/{index}
GetMetadataSchemaRequest
Request message for MetadataService.GetMetadataSchema
.
name
string
Required. The resource name of the MetadataSchema to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/metadataSchemas/{metadataschema}
GetMetadataStoreRequest
Request message for MetadataService.GetMetadataStore
.
name
string
Required. The resource name of the MetadataStore to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
GetModelDeploymentMonitoringJobRequest
Request message for JobService.GetModelDeploymentMonitoringJob
.
name
string
Required. The resource name of the ModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
GetModelEvaluationRequest
Request message for ModelService.GetModelEvaluation
.
name
string
Required. The name of the ModelEvaluation resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}
GetModelEvaluationSliceRequest
Request message for ModelService.GetModelEvaluationSlice
.
name
string
Required. The name of the ModelEvaluationSlice resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}/slices/{slice}
GetModelRequest
Request message for ModelService.GetModel
.
name
string
Required. The name of the Model resource. Format: projects/{project}/locations/{location}/models/{model}
In order to retrieve a specific version of the model, also provide the version ID or version alias. Example: projects/{project}/locations/{location}/models/{model}@2
or projects/{project}/locations/{location}/models/{model}@golden
If no version ID or alias is specified, the "default" version will be returned. The "default" version alias is created for the first version of the model, and can be moved to other versions later on. There will be exactly one default version.
GetNasJobRequest
Request message for JobService.GetNasJob
.
name
string
Required. The name of the NasJob resource. Format: projects/{project}/locations/{location}/nasJobs/{nas_job}
GetNasTrialDetailRequest
Request message for JobService.GetNasTrialDetail
.
name
string
Required. The name of the NasTrialDetail resource. Format: projects/{project}/locations/{location}/nasJobs/{nas_job}/nasTrialDetails/{nas_trial_detail}
GetNotebookExecutionJobRequest
Request message for [NotebookService.GetNotebookExecutionJob]
name
string
Required. The name of the NotebookExecutionJob resource.
Optional. The NotebookExecutionJob view. Defaults to BASIC.
GetNotebookRuntimeRequest
Request message for NotebookService.GetNotebookRuntime
name
string
Required. The name of the NotebookRuntime resource. Instead of checking whether the name is in a valid NotebookRuntime resource name format, a NotFound exception is thrown directly if there is no such NotebookRuntime.
GetNotebookRuntimeTemplateRequest
Request message for NotebookService.GetNotebookRuntimeTemplate
name
string
Required. The name of the NotebookRuntimeTemplate resource. Format: projects/{project}/locations/{location}/notebookRuntimeTemplates/{notebook_runtime_template}
GetPersistentResourceRequest
Request message for PersistentResourceService.GetPersistentResource
.
name
string
Required. The name of the PersistentResource resource. Format: projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}
GetPipelineJobRequest
Request message for PipelineService.GetPipelineJob
.
name
string
Required. The name of the PipelineJob resource. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}
GetPublisherModelRequest
Request message for ModelGardenService.GetPublisherModel
name
string
Required. The name of the PublisherModel resource. Format: publishers/{publisher}/models/{publisher_model}
language_code
string
Optional. The IETF BCP-47 language code representing the language in which the publisher model's text information should be written.
Optional. PublisherModel view specifying which fields to read.
is_hugging_face_model
bool
Optional. Indicates whether the requested model is a Hugging Face model.
GetScheduleRequest
Request message for ScheduleService.GetSchedule.
name
string
Required. The name of the Schedule resource. Format: projects/{project}/locations/{location}/schedules/{schedule}
GetSpecialistPoolRequest
Request message for SpecialistPoolService.GetSpecialistPool.
name
string
Required. The name of the SpecialistPool resource. The form is projects/{project}/locations/{location}/specialistPools/{specialist_pool}.
GetStudyRequest
Request message for VizierService.GetStudy.
name
string
Required. The name of the Study resource. Format: projects/{project}/locations/{location}/studies/{study}
GetTensorboardExperimentRequest
Request message for TensorboardService.GetTensorboardExperiment.
name
string
Required. The name of the TensorboardExperiment resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
GetTensorboardRequest
Request message for TensorboardService.GetTensorboard.
name
string
Required. The name of the Tensorboard resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
GetTensorboardRunRequest
Request message for TensorboardService.GetTensorboardRun.
name
string
Required. The name of the TensorboardRun resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
GetTensorboardTimeSeriesRequest
Request message for TensorboardService.GetTensorboardTimeSeries.
name
string
Required. The name of the TensorboardTimeSeries resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
GetTrainingPipelineRequest
Request message for PipelineService.GetTrainingPipeline.
name
string
Required. The name of the TrainingPipeline resource. Format: projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}
GetTrialRequest
Request message for VizierService.GetTrial.
name
string
Required. The name of the Trial resource. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}
GetTuningJobRequest
Request message for GenAiTuningService.GetTuningJob.
name
string
Required. The name of the TuningJob resource. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}
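The Format strings in the Get*Request messages above all follow the same resource-name pattern, so a small helper (hypothetical, not part of any SDK) can assemble them:

```python
def resource_name(template: str, **params: str) -> str:
    """Fill a Vertex AI resource-name template such as
    'projects/{project}/locations/{location}/tuningJobs/{tuning_job}'."""
    return template.format(**params)

TUNING_JOB = "projects/{project}/locations/{location}/tuningJobs/{tuning_job}"

name = resource_name(TUNING_JOB, project="my-project",
                     location="us-central1", tuning_job="123")
# name == "projects/my-project/locations/us-central1/tuningJobs/123"
```

The same helper works for any of the formats listed above by swapping in the corresponding template string.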
GoogleSearchRetrieval
Tool to retrieve public web data for grounding, powered by Google.
Specifies the dynamic retrieval configuration for the given source.
GroundednessInput
Input for groundedness metric.
Required. Spec for groundedness metric.
Required. Groundedness instance.
GroundednessInstance
Spec for groundedness instance.
prediction
string
Required. Output of the evaluated model.
context
string
Required. Background information provided in context used to compare against the prediction.
GroundednessResult
Spec for groundedness result.
explanation
string
Output only. Explanation for groundedness score.
score
float
Output only. Groundedness score.
confidence
float
Output only. Confidence for groundedness score.
GroundednessSpec
Spec for groundedness metric.
version
int32
Optional. Which version to use for evaluation.
GroundingChunk
Grounding chunk.
Union field chunk_type. Chunk type. chunk_type can be only one of the following:
Grounding chunk from the web.
Grounding chunk from context retrieved by the retrieval tools.
RetrievedContext
Chunk from context retrieved by the retrieval tools.
uri
string
URI reference of the attribution.
title
string
Title of the attribution.
Web
Chunk from the web.
uri
string
URI reference of the chunk.
title
string
Title of the chunk.
GroundingMetadata
Metadata returned to client when grounding is enabled.
web_search_queries[]
string
Optional. Web search queries for the follow-up web search.
List of supporting references retrieved from specified grounding source.
Optional. List of grounding support.
Optional. Google search entry for the follow-up web searches.
Optional. Output only. Retrieval metadata.
GroundingSupport
Grounding support.
grounding_chunk_indices[]
int32
A list of indices (into 'grounding_chunk') specifying the citations associated with the claim. For instance [1,3,4] means that grounding_chunk[1], grounding_chunk[3], grounding_chunk[4] are the retrieved content attributed to the claim.
confidence_scores[]
float
Confidence score of the support references. Ranges from 0 to 1. 1 is the most confident. This list must have the same size as the grounding_chunk_indices.
Segment of the content this support belongs to.
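The relationship between grounding_chunk_indices and confidence_scores can be sketched in Python (resolve_support is an illustrative helper, not part of the API):

```python
def resolve_support(grounding_chunk_indices, confidence_scores, grounding_chunks):
    """Pair each cited chunk with its confidence score.

    Mirrors the documented contract: confidence_scores must have the
    same size as grounding_chunk_indices, and each index points into
    the grounding_chunks list.
    """
    if len(confidence_scores) != len(grounding_chunk_indices):
        raise ValueError("confidence_scores must match grounding_chunk_indices in size")
    return [(grounding_chunks[i], s)
            for i, s in zip(grounding_chunk_indices, confidence_scores)]

chunks = ["chunk0", "chunk1", "chunk2", "chunk3", "chunk4"]
resolve_support([1, 3, 4], [0.9, 0.7, 0.5], chunks)
# [('chunk1', 0.9), ('chunk3', 0.7), ('chunk4', 0.5)]
```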
HarmCategory
Harm categories that will block the content.
Enums | |
---|---|
HARM_CATEGORY_UNSPECIFIED |
The harm category is unspecified. |
HARM_CATEGORY_HATE_SPEECH |
The harm category is hate speech. |
HARM_CATEGORY_DANGEROUS_CONTENT |
The harm category is dangerous content. |
HARM_CATEGORY_HARASSMENT |
The harm category is harassment. |
HARM_CATEGORY_SEXUALLY_EXPLICIT |
The harm category is sexually explicit content. |
HARM_CATEGORY_CIVIC_INTEGRITY |
The harm category is civic integrity. |
HyperparameterTuningJob
Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.
name
string
Output only. Resource name of the HyperparameterTuningJob.
display_name
string
Required. The display name of the HyperparameterTuningJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Required. Study configuration of the HyperparameterTuningJob.
max_trial_count
int32
Required. The desired total number of Trials.
parallel_trial_count
int32
Required. The desired number of Trials to run in parallel.
max_failed_trial_count
int32
The number of failed Trials that need to be seen before failing the HyperparameterTuningJob.
If set to 0, Vertex AI decides how many Trials must fail before the whole job fails.
Required. The spec of a trial job. The same spec applies to the CustomJobs created in all the trials.
Output only. Trials of the HyperparameterTuningJob.
Output only. The detailed state of the job.
Output only. Time when the HyperparameterTuningJob was created.
Output only. Time when the HyperparameterTuningJob for the first time entered the JOB_STATE_RUNNING
state.
Output only. Time when the HyperparameterTuningJob entered any of the following states: JOB_STATE_SUCCEEDED
, JOB_STATE_FAILED
, JOB_STATE_CANCELLED
.
Output only. Time when the HyperparameterTuningJob was most recently updated.
Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
labels
map<string, string>
The labels with user-defined metadata to organize HyperparameterTuningJobs.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
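The label constraints above can be checked client-side before submitting a job. This is a sketch of the documented rules (length and ASCII character set; non-ASCII "international" characters pass through), not the server-side validation itself:

```python
import string

_ALLOWED_ASCII = set(string.ascii_lowercase + string.digits + "_-")

def is_valid_label_token(token: str) -> bool:
    """Check one label key or value: at most 64 Unicode codepoints;
    ASCII characters limited to lowercase letters, digits, underscores
    and dashes; non-ASCII (international) characters are allowed."""
    if len(token) > 64:
        return False
    return all(ch in _ALLOWED_ASCII for ch in token if ord(ch) < 128)

assert is_valid_label_token("team-alpha_01")
assert not is_valid_label_token("Team")    # uppercase ASCII rejected
assert not is_valid_label_token("x" * 65)  # too long
```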
Customer-managed encryption key options for a HyperparameterTuningJob. If this is set, then all resources created by the HyperparameterTuningJob will be encrypted with the provided encryption key.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
IdMatcher
Matcher for Features of an EntityType by Feature ID.
ids[]
string
Required. The following are accepted as ids:
- A single-element list containing only *, which selects all Features in the target EntityType, or
- A list containing only Feature IDs, which selects only Features with those IDs in the target EntityType.
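The IdMatcher semantics can be sketched as follows (match_feature_ids is a hypothetical helper, not part of the API):

```python
def match_feature_ids(ids, all_feature_ids):
    """Apply IdMatcher semantics: ["*"] selects every Feature in the
    target EntityType; otherwise only the listed Feature IDs match."""
    if ids == ["*"]:
        return list(all_feature_ids)
    wanted = set(ids)
    return [f for f in all_feature_ids if f in wanted]

features = ["age", "height", "weight"]
assert match_feature_ids(["*"], features) == ["age", "height", "weight"]
assert match_feature_ids(["age", "weight"], features) == ["age", "weight"]
```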
ImportDataConfig
Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.
data_item_labels
map<string, string>
Labels that will be applied to newly imported DataItems. If an identical DataItem already exists in the Dataset, these labels are appended to those of the existing one; if a label with an identical key was imported before, the old label value is overwritten. If two DataItems in the same import operation are identical, their labels are combined, and if a key collision happens, one of the values is picked randomly. Two DataItems are considered identical if their content bytes are identical (e.g. image bytes or pdf bytes). These labels will be overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a jsonl file.
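The key-collision behaviour for an identical existing DataItem can be sketched as follows (merge_data_item_labels is illustrative; the service performs the merge server-side):

```python
def merge_data_item_labels(existing: dict, imported: dict) -> dict:
    """Documented merge behaviour when an imported DataItem is identical
    to an existing one: imported labels are appended to the existing
    ones, and on a key collision the newly imported value overwrites
    the old one."""
    merged = dict(existing)
    merged.update(imported)  # identical keys: new value wins
    return merged

merge_data_item_labels({"split": "train", "src": "v1"},
                       {"src": "v2", "lang": "en"})
# {'split': 'train', 'src': 'v2', 'lang': 'en'}
```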
annotation_labels
map<string, string>
Labels that will be applied to newly imported Annotations. If two Annotations are identical, one of them will be deduped. Two Annotations are considered identical if their payload, payload_schema_uri, and all of their labels are the same. These labels will be overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a jsonl file.
import_schema_uri
string
Required. Points to a YAML file stored on Google Cloud Storage describing the import format. Validation will be done against the schema. The schema is defined as an OpenAPI 3.0.2 Schema Object.
Union field source. The source of the input. source can be only one of the following:
The Google Cloud Storage location for the input content.
ImportDataOperationMetadata
Runtime operation information for DatasetService.ImportData.
The common part of the operation metadata.
ImportDataRequest
Request message for DatasetService.ImportData.
name
string
Required. The name of the Dataset resource. Format: projects/{project}/locations/{location}/datasets/{dataset}
Required. The desired input locations. The contents of all input locations will be imported in one batch.
ImportDataResponse
This type has no fields.
Response message for DatasetService.ImportData.
ImportFeatureValuesOperationMetadata
Details of operations that perform import Feature values.
Operation metadata for Featurestore import Feature values.
imported_entity_count
int64
Number of entities that have been imported by the operation.
imported_feature_value_count
int64
Number of Feature values that have been imported by the operation.
source_uris[]
string
The source URI from where Feature values are imported.
invalid_row_count
int64
The number of rows in the input source that weren't imported due to either:
- Not having any featureValues.
- Having a null entityId.
- Having a null timestamp.
- Not being parsable (applicable for CSV sources).
timestamp_outside_retention_rows_count
int64
The number of rows that weren't ingested due to having timestamps outside the retention boundary.
blocking_operation_ids[]
int64
List of ImportFeatureValues operations running under a single EntityType that are blocking this operation.
ImportFeatureValuesRequest
Request message for FeaturestoreService.ImportFeatureValues.
entity_type
string
Required. The resource name of the EntityType grouping the Features for which values are being imported. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}
entity_id_field
string
Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id.
Required. Specifications defining which Feature values to import from the entity. The request fails if no feature_specs are provided, and having multiple feature_specs for one Feature is not allowed.
disable_online_serving
bool
If set, data will not be imported for online serving. This is typically used for backfilling, where Feature generation timestamps are not in the timestamp range needed for online serving.
worker_count
int32
Specifies the number of workers that are used to write data to the Featurestore. Consider the online serving capacity that you require to achieve the desired import throughput without interfering with online serving. The value must be positive, and less than or equal to 100. If not set, defaults to using 1 worker. The low count ensures minimal impact on online serving performance.
disable_ingestion_analysis
bool
If true, the API doesn't start the ingestion analysis pipeline.
Union field source. Details about the source data, including the location of the storage and the format. source can be only one of the following:
Union field feature_time_source. Source of Feature timestamp for all Feature values of each entity. Timestamps must be millisecond-aligned. feature_time_source can be only one of the following:
feature_time_field
string
Source column that holds the Feature timestamp for all Feature values in each entity.
Single Feature timestamp for all entities being imported. The timestamp must not have higher than millisecond precision.
FeatureSpec
Defines the Feature value(s) to import.
id
string
Required. ID of the Feature to import values of. This Feature must exist in the target EntityType, or the request will fail.
source_field
string
Source column to get the Feature values from. If not set, uses the column with the same name as the Feature ID.
ImportFeatureValuesResponse
Response message for FeaturestoreService.ImportFeatureValues.
imported_entity_count
int64
Number of entities that have been imported by the operation.
imported_feature_value_count
int64
Number of Feature values that have been imported by the operation.
invalid_row_count
int64
The number of rows in the input source that weren't imported due to either:
- Not having any featureValues.
- Having a null entityId.
- Having a null timestamp.
- Not being parsable (applicable for CSV sources).
timestamp_outside_retention_rows_count
int64
The number of rows that weren't ingested due to having feature timestamps outside the retention boundary.
ImportModelEvaluationRequest
Request message for ModelService.ImportModelEvaluation
parent
string
Required. The name of the parent model resource. Format: projects/{project}/locations/{location}/models/{model}
Required. Model evaluation resource to be imported.
Index
A representation of a collection of database items organized in a way that allows for approximate nearest neighbor (a.k.a. ANN) search algorithms.
name
string
Output only. The resource name of the Index.
display_name
string
Required. The display name of the Index. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the Index.
metadata_schema_uri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
Additional information about the Index; the schema of the metadata can be found in metadata_schema.
Output only. The pointers to DeployedIndexes created from this Index. An Index can only be deleted if all its DeployedIndexes have been undeployed first.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Indexes.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Output only. Timestamp when this Index was created.
Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. The result of any successfully completed Operation on the Index is reflected in it.
Output only. Stats of the index resource.
Immutable. The update method to use with this Index. If not set, BATCH_UPDATE will be used by default.
Immutable. Customer-managed encryption key spec for an Index. If set, this Index and all sub-resources of this Index will be secured by this key.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
IndexUpdateMethod
The update method of an Index.
Enums | |
---|---|
INDEX_UPDATE_METHOD_UNSPECIFIED |
Should not be used. |
BATCH_UPDATE |
BatchUpdate: user can call UpdateIndex with files on Cloud Storage of Datapoints to update. |
STREAM_UPDATE |
StreamUpdate: user can call UpsertDatapoints/DeleteDatapoints to update the Index and the updates will be applied in corresponding DeployedIndexes in nearly real-time. |
IndexDatapoint
A datapoint of Index.
datapoint_id
string
Required. Unique identifier of the datapoint.
feature_vector[]
float
Required. Feature embedding vector for dense index. An array of numbers with the length of [NearestNeighborSearchConfig.dimensions].
Optional. Feature embedding vector for sparse index.
Optional. List of Restricts of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering
Optional. List of Restricts of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses numeric comparisons.
Optional. CrowdingTag of the datapoint, the number of neighbors to return in each crowding can be configured during query.
CrowdingTag
Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute.
crowding_attribute
string
The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query.
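The crowding constraint can be sketched as a post-filter over an already-ranked neighbor list (apply_crowding is a hypothetical helper; the service enforces this internally during search):

```python
def apply_crowding(neighbors, per_crowding_attribute_num_neighbors):
    """Keep at most k' neighbors per crowding_attribute value, in rank
    order. Each neighbor is a (datapoint_id, crowding_attribute) pair."""
    kept, seen = [], {}
    for datapoint_id, attr in neighbors:
        if seen.get(attr, 0) < per_crowding_attribute_num_neighbors:
            kept.append(datapoint_id)
            seen[attr] = seen.get(attr, 0) + 1
    return kept

ranked = [("a", "shoes"), ("b", "shoes"), ("c", "hats"), ("d", "shoes")]
apply_crowding(ranked, 2)
# ['a', 'b', 'c']  -- 'd' dropped: already 2 'shoes' neighbors
```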
NumericRestriction
This field allows restricts to be based on numeric comparisons rather than categorical tokens.
namespace
string
The namespace of this restriction. e.g.: cost.
This MUST be specified for queries and must NOT be specified for datapoints.
Union field Value. The type of Value must be consistent for all datapoints with a given namespace name. This is verified at runtime. Value can be only one of the following:
value_int
int64
Represents 64 bit integer.
value_float
float
Represents 32 bit float.
value_double
double
Represents 64 bit float.
Operator
Which comparison operator to use. Should be specified for queries only; specifying this for a datapoint is an error.
Datapoints for which Operator is true relative to the query's Value field will be allowlisted.
Enums | |
---|---|
OPERATOR_UNSPECIFIED |
Default value of the enum. |
LESS |
Datapoints are eligible iff their value is < the query's. |
LESS_EQUAL |
Datapoints are eligible iff their value is <= the query's. |
EQUAL |
Datapoints are eligible iff their value is == the query's. |
GREATER_EQUAL |
Datapoints are eligible iff their value is >= the query's. |
GREATER |
Datapoints are eligible iff their value is > the query's. |
NOT_EQUAL |
Datapoints are eligible iff their value is != the query's. |
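The operator semantics map directly onto Python comparisons, applied as datapoint_value <op> query_value; a sketch, not the service implementation:

```python
import operator

# Comparison semantics of each NumericRestriction.Operator value.
_OPS = {
    "LESS": operator.lt,
    "LESS_EQUAL": operator.le,
    "EQUAL": operator.eq,
    "GREATER_EQUAL": operator.ge,
    "GREATER": operator.gt,
    "NOT_EQUAL": operator.ne,
}

def numeric_restriction_allows(op_name, datapoint_value, query_value):
    """A datapoint is allowlisted iff the operator holds for its value
    relative to the query's value."""
    return _OPS[op_name](datapoint_value, query_value)

assert numeric_restriction_allows("LESS", 5.0, 10.0)
assert not numeric_restriction_allows("GREATER_EQUAL", 5.0, 10.0)
```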
Restriction
Restriction of a datapoint, which describes its attributes (tokens) from each of several attribute categories (namespaces).
namespace
string
The namespace of this restriction. e.g.: color.
allow_list[]
string
The attributes to allow in this namespace. e.g.: 'red'
deny_list[]
string
The attributes to deny in this namespace. e.g.: 'blue'
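A sketch of allow/deny evaluation for a single namespace, assuming a query token must not be denied and, when an allow list is present, must appear in it (restriction_allows is illustrative; see the filtering documentation for exact service semantics):

```python
def restriction_allows(token: str, allow_list, deny_list) -> bool:
    """Evaluate one Restriction namespace for a query token."""
    if token in deny_list:
        return False
    if allow_list and token not in allow_list:
        return False
    return True

assert restriction_allows("red", allow_list=["red"], deny_list=["blue"])
assert not restriction_allows("blue", allow_list=[], deny_list=["blue"])
```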
SparseEmbedding
Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions.
values[]
float
Required. The list of embedding values of the sparse vector.
dimensions[]
int64
Required. The list of indexes for the embedding values of the sparse vector.
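The values[]/dimensions[] pairing can be illustrated with a sparse dot product (sparse_dot is a hypothetical helper; the service computes similarity internally):

```python
def sparse_dot(a_values, a_dims, b_values, b_dims):
    """Dot product of two SparseEmbedding-style vectors, where values[]
    holds the embedding values and dimensions[] the indices they occupy."""
    b = dict(zip(b_dims, b_values))
    return sum(v * b.get(d, 0.0) for v, d in zip(a_values, a_dims))

# The vectors overlap only at dimension 3: 2.0 * 0.5 = 1.0
sparse_dot([1.0, 2.0], [0, 3], [0.5, 4.0], [3, 7])
# 1.0
```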
IndexEndpoint
An IndexEndpoint is a resource into which Indexes are deployed. An IndexEndpoint can have multiple DeployedIndexes.
name
string
Output only. The resource name of the IndexEndpoint.
display_name
string
Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the IndexEndpoint.
Output only. The indexes deployed in this endpoint.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your IndexEndpoints.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Output only. Timestamp when this IndexEndpoint was created.
Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
network
string
Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered.
Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network.
network and private_service_connect_config are mutually exclusive.
Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
enable_private_service_connect
(deprecated)
bool
Optional. Deprecated: If true, expose the IndexEndpoint via private service connect.
Only one of the fields, network or enable_private_service_connect, can be set.
Optional. Configuration for private service connect.
network and private_service_connect_config are mutually exclusive.
public_endpoint_enabled
bool
Optional. If true, the deployed index will be accessible through public endpoint.
public_endpoint_domain_name
string
Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
IndexPrivateEndpoints
IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment.
match_grpc_address
string
Output only. The IP address used to send match gRPC requests.
service_attachment
string
Output only. The name of the service attachment resource. Populated if private service connect is enabled.
Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
IndexStats
Stats of the Index.
vectors_count
int64
Output only. The number of dense vectors in the Index.
sparse_vectors_count
int64
Output only. The number of sparse vectors in the Index.
shards_count
int32
Output only. The number of shards in the Index.
InputDataConfig
Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.
dataset_id
string
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's training_task_definition. For tabular Datasets, all their data is exported to training, to pick and choose from.
annotations_filter
string
Applicable only to Datasets that have DataItems and Annotations.
A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in, respectively, the training, validation, or test role, depending on the role of the DataItem they are on (for auto-assigned DataItems, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.
annotation_schema_uri
string
Applicable only to custom training with Datasets that have DataItems and Annotations.
Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id.
Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in, respectively, the training, validation, or test role, depending on the role of the DataItem they are on.
When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.
saved_query_id
string
Only applicable to Datasets that have SavedQueries.
The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id used for filtering Annotations for training.
Only Annotations that are associated with this SavedQuery are used in training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter.
Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: the problem type.
persist_ml_use_assignment
bool
Whether to persist the ML use assignment to data item system labels.
Union field split. Instructions for how the input data should be split between the training, validation and test sets. If no split type is provided, fraction_split is used by default. split can be only one of the following:
Split based on fractions defining the size of each set.
Split based on the provided filters for each set.
Supported only for tabular Datasets.
Split based on a predefined key.
Supported only for tabular Datasets.
Split based on the timestamp of the input data pieces.
Supported only for tabular Datasets.
Split based on the distribution of the specified column.
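A fraction-based split like the first option above can be sketched as follows (the field names and shuffling strategy are illustrative; Vertex AI performs the actual split server-side):

```python
import random

def fraction_split(items, training=0.8, validation=0.1, test=0.1, seed=0):
    """Shuffle the data items and slice them by the configured fractions,
    mirroring a fraction_split with training/validation/test fractions."""
    assert abs(training + validation + test - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * training)
    n_val = int(n * validation)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = fraction_split(range(100))
# len(train), len(val), len(test) == 80, 10, 10
```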
Union field destination
. Only applicable to Custom and Hyperparameter Tuning TrainingPipelines.
The destination of the training data to be written to.
Supported destination file formats: * For non-tabular data: "jsonl". * For tabular data: "csv" and "bigquery".
The following Vertex AI environment variables are passed to containers or python modules of the training task when this field is set:
- AIP_DATA_FORMAT : Exported data format.
- AIP_TRAINING_DATA_URI : Sharded exported training data uris.
- AIP_VALIDATION_DATA_URI : Sharded exported validation data uris.
- AIP_TEST_DATA_URI : Sharded exported test data uris.
destination can be only one of the following:
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with name: dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>
where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory.
The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data. e.g.: "gs://.../training-*.jsonl"
- AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
- AIP_TRAINING_DATA_URI = "gcs_destination/dataset-
- AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-
- AIP_TEST_DATA_URI = "gcs_destination/dataset-
Only applicable to custom training with tabular Dataset with BigQuery source.
The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>
where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, training
, validation
and test
.
- AIP_DATA_FORMAT = "bigquery".
- AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_
- AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_
- AIP_TEST_DATA_URI = "bigquery_destination.dataset_
Int64Array
A list of int64 values.
values[]
int64
A list of int64 values.
IntegratedGradientsAttribution
An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
step_count
int32
Required. The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is within the desired error range.
The valid range is [1, 100], inclusive.
Config for SmoothGrad approximation of gradients.
When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
Config for IG with blur baseline.
When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
JobState
Describes the state of a job.
Enums | |
---|---|
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has been just created or resumed and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED , JOB_STATE_FAILED or JOB_STATE_CANCELLED . |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job has partially succeeded; some results may be missing due to errors. |
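A client often only needs to know whether a job will make further progress. A sketch based on the enum descriptions above (treat the exact terminal set, especially JOB_STATE_PARTIALLY_SUCCEEDED, as an assumption; JOB_STATE_PAUSED is excluded because a paused job can be resumed):

```python
# Terminal JobState values, inferred from the enum descriptions.
TERMINAL_STATES = {
    "JOB_STATE_SUCCEEDED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_EXPIRED",
    "JOB_STATE_PARTIALLY_SUCCEEDED",
}

def is_terminal(state: str) -> bool:
    """Whether a job in this state will make no further progress."""
    return state in TERMINAL_STATES

assert is_terminal("JOB_STATE_SUCCEEDED")
assert not is_terminal("JOB_STATE_RUNNING")
assert not is_terminal("JOB_STATE_PAUSED")  # resumable, not terminal
```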
LargeModelReference
Contains information about the Large Model.
name
string
Required. The unique name of the large Foundation or pre-built model. Like "chat-bison", "text-bison". Or model name with version ID, like "chat-bison@001", "text-bison@005", etc.
LineageSubgraph
ListAnnotationsRequest
Request message for DatasetService.ListAnnotations.
parent
string
Required. The resource name of the DataItem to list Annotations from. Format: projects/{project}/locations/{location}/datasets/{dataset}/dataItems/{data_item}
filter
string
The standard list filter.
page_size
int32
The standard list page size.
page_token
string
The standard list page token.
Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
ListAnnotationsResponse
Response message for DatasetService.ListAnnotations.
A list of Annotations that matches the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListArtifactsRequest
Request message for MetadataService.ListArtifacts.
parent
string
Required. The MetadataStore whose Artifacts should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
page_size
int32
The maximum number of Artifacts to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.
page_token
string
A page token, received from a previous MetadataService.ListArtifacts
call. Provide this to retrieve the subsequent page.
When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with an INVALID_ARGUMENT error.)
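The page_token contract above is the standard List-method pattern: pass the previous response's next_page_token until it comes back empty. A generic client-side sketch, with a hypothetical list_page callable standing in for a real List call:

```python
def list_all(list_page):
    """Drain a paginated List method into a single list.

    list_page(page_token) must return (items, next_page_token); an empty
    next_page_token means there are no further pages.
    """
    items, token = [], ""
    while True:
        page, token = list_page(token)
        items.extend(page)
        if not token:
            return items

# Fake two-page backend standing in for a real API.
pages = {"": (["a", "b"], "tok-1"), "tok-1": (["c"], "")}
everything = list_all(lambda token: pages[token])
```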
filter
string
Filter specifying the boolean condition for the Artifacts to satisfy in order to be part of the result set. The filter query syntax is based on https://google.aip.dev/160. The supported set of filters includes the following:
- Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, uri, state, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC 3339 format. For example: create_time = "2020-11-19T11:30:00-04:00".
- Metadata field: To filter on metadata fields, use the traversal operation metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. If the field name contains special characters (such as a colon), embed it in double quotes. For example: metadata."field:1".number_value = 10.0.
- Context-based filtering: To filter Artifacts based on the Contexts to which they belong, use the function operator with the full resource name: in_context(<context-name>). For example: in_context("projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context-id>").
Each of the above filter types can be combined using the logical operators AND and OR. The maximum allowed nested expression depth is 5.
For example: display_name = "test" AND metadata.field1.bool_value = true.
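For illustration only, filters in this grammar can be composed with ordinary string handling; the and_filter helper below is an assumption of this sketch, not part of any client library:

```python
def and_filter(*clauses):
    """Join AIP-160-style filter clauses with AND, parenthesizing each one."""
    return " AND ".join(f"({c})" for c in clauses)

# Combine an attribute clause with a metadata-field clause, as described above.
artifact_filter = and_filter(
    'display_name = "test"',
    "metadata.field_1.number_value = 10.0",
)
```

Parenthesizing each clause keeps the combined expression unambiguous when clauses themselves contain OR; the depth of nesting must stay within the documented limit of 5.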
order_by
string
How the list of messages is ordered. Specify the values to order by and an ordering operation. The default sorting order is ascending. To specify descending order for a field, append a " desc" suffix; for example: "foo desc, bar". Subfields are specified with a . character, such as foo.bar. See https://google.aip.dev/132#ordering for more details.
ListArtifactsResponse
Response message for MetadataService.ListArtifacts
.
The Artifacts retrieved from the MetadataStore.
next_page_token
string
A token, which can be sent as ListArtifactsRequest.page_token
to retrieve the next page. If this field is not populated, there are no subsequent pages.
ListBatchPredictionJobsRequest
Request message for JobService.ListBatchPredictionJobs
.
parent
string
Required. The resource name of the Location to list the BatchPredictionJobs from. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
Supported fields:
- display_name supports =, != comparisons, and : wildcard.
- model_display_name supports =, != comparisons.
- state supports =, != comparisons.
- create_time supports =, !=, <, <=, >, >= comparisons. create_time must be in RFC 3339 format.
- labels supports general map functions, that is:
labels.key=value - key:value equality
labels.key:* - key existence
Some examples of using the filter are:
- state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
- state!="JOB_STATE_FAILED" OR display_name="my_job"
- NOT display_name="my_job"
- create_time>"2021-05-18T00:00:00Z"
- labels.keyA=valueA
- labels.keyB:*
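Since create_time comparisons require RFC 3339 timestamps, a time clause like the example above can be rendered from a datetime. A minimal sketch, assuming a timezone-aware input (the created_after helper name is this sketch's own):

```python
from datetime import datetime, timezone

def created_after(dt):
    """Build a create_time filter clause from a timezone-aware datetime.

    The value is rendered in RFC 3339 format with a "Z" UTC suffix.
    """
    ts = dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f'create_time>"{ts}"'

clause = created_after(datetime(2021, 5, 18, tzinfo=timezone.utc))
```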
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListBatchPredictionJobsResponse.next_page_token
of the previous JobService.ListBatchPredictionJobs
call.
Mask specifying which fields to read.
ListBatchPredictionJobsResponse
Response message for JobService.ListBatchPredictionJobs
List of BatchPredictionJobs in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListBatchPredictionJobsRequest.page_token
to obtain that page.
ListContextsRequest
Request message for MetadataService.ListContexts
parent
string
Required. The MetadataStore whose Contexts should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
page_size
int32
The maximum number of Contexts to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.
page_token
string
A page token, received from a previous MetadataService.ListContexts
call. Provide this to retrieve the subsequent page.
When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with an INVALID_ARGUMENT error.)
filter
string
Filter specifying the boolean condition for the Contexts to satisfy in order to be part of the result set. The filter query syntax is based on https://google.aip.dev/160. The following filters are supported:
- Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC 3339 format. For example: create_time = "2020-11-19T11:30:00-04:00".
- Metadata field: To filter on metadata fields, use the traversal operation metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. If the field name contains special characters (such as a colon), embed it in double quotes. For example: metadata."field:1".number_value = 10.0.
- Parent-child filtering: To filter Contexts based on a parent-child relationship, use the HAS operator as follows:
parent_contexts: "projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context_id>"
child_contexts: "projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context_id>"
Each of the above supported filters can be combined using the logical operators AND and OR. The maximum allowed nested expression depth is 5.
For example: display_name = "test" AND metadata.field1.bool_value = true.
order_by
string
How the list of messages is ordered. Specify the values to order by and an ordering operation. The default sorting order is ascending. To specify descending order for a field, append a " desc" suffix; for example: "foo desc, bar". Subfields are specified with a . character, such as foo.bar. See https://google.aip.dev/132#ordering for more details.
ListContextsResponse
Response message for MetadataService.ListContexts
.
The Contexts retrieved from the MetadataStore.
next_page_token
string
A token, which can be sent as ListContextsRequest.page_token
to retrieve the next page. If this field is not populated, there are no subsequent pages.
ListCustomJobsRequest
Request message for JobService.ListCustomJobs
.
parent
string
Required. The resource name of the Location to list the CustomJobs from. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
Supported fields:
- display_name supports =, != comparisons, and : wildcard.
- state supports =, != comparisons.
- create_time supports =, !=, <, <=, >, >= comparisons. create_time must be in RFC 3339 format.
- labels supports general map functions, that is:
labels.key=value - key:value equality
labels.key:* - key existence
Some examples of using the filter are:
- state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
- state!="JOB_STATE_FAILED" OR display_name="my_job"
- NOT display_name="my_job"
- create_time>"2021-05-18T00:00:00Z"
- labels.keyA=valueA
- labels.keyB:*
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListCustomJobsResponse.next_page_token
of the previous JobService.ListCustomJobs
call.
Mask specifying which fields to read.
ListCustomJobsResponse
Response message for JobService.ListCustomJobs
List of CustomJobs in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListCustomJobsRequest.page_token
to obtain that page.
ListDataItemsRequest
Request message for DatasetService.ListDataItems
.
parent
string
Required. The resource name of the Dataset to list DataItems from. Format: projects/{project}/locations/{location}/datasets/{dataset}
filter
string
The standard list filter.
page_size
int32
The standard list page size.
page_token
string
The standard list page token.
Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
ListDataItemsResponse
Response message for DatasetService.ListDataItems
.
A list of DataItems that matches the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListDatasetVersionsRequest
Request message for DatasetService.ListDatasetVersions
.
parent
string
Required. The resource name of the Dataset to list DatasetVersions from. Format: projects/{project}/locations/{location}/datasets/{dataset}
filter
string
Optional. The standard list filter.
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token.
Optional. Mask specifying which fields to read.
order_by
string
Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
ListDatasetVersionsResponse
Response message for DatasetService.ListDatasetVersions
.
A list of DatasetVersions that matches the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListDatasetsRequest
Request message for DatasetService.ListDatasets
.
parent
string
Required. The name of the Dataset's parent resource. Format: projects/{project}/locations/{location}
filter
string
An expression for filtering the results of the request. For field names, both snake_case and camelCase are supported.
- display_name: supports = and !=
- metadata_schema_uri: supports = and !=
- labels supports general map functions, that is:
labels.key=value - key:value equality
labels.key:* or labels:key - key existence
A key including a space must be quoted: labels."a key".
Some examples:
- displayName="myDisplayName"
- labels.myKey="myValue"
page_size
int32
The standard list page size.
page_token
string
The standard list page token.
Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
display_name
create_time
update_time
ListDatasetsResponse
Response message for DatasetService.ListDatasets
.
A list of Datasets that matches the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListDeploymentResourcePoolsRequest
Request message for ListDeploymentResourcePools method.
parent
string
Required. The parent Location which owns this collection of DeploymentResourcePools. Format: projects/{project}/locations/{location}
page_size
int32
The maximum number of DeploymentResourcePools to return. The service may return fewer than this value.
page_token
string
A page token, received from a previous ListDeploymentResourcePools
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to ListDeploymentResourcePools
must match the call that provided the page token.
ListDeploymentResourcePoolsResponse
Response message for ListDeploymentResourcePools method.
The DeploymentResourcePools from the specified location.
next_page_token
string
A token, which can be sent as page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListEndpointsRequest
Request message for EndpointService.ListEndpoints
.
parent
string
Required. The resource name of the Location from which to list the Endpoints. Format: projects/{project}/locations/{location}
filter
string
Optional. An expression for filtering the results of the request. For field names, both snake_case and camelCase are supported.
- endpoint supports = and !=. endpoint represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name.
- display_name supports = and !=.
- labels supports general map functions, that is:
labels.key=value - key:value equality
labels.key:* or labels:key - key existence
A key including a space must be quoted: labels."a key".
- base_model_name only supports =.
Some examples:
- endpoint=1
- displayName="myDisplayName"
- labels.myKey="myValue"
- baseModelName="text-bison"
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListEndpointsResponse.next_page_token
of the previous EndpointService.ListEndpoints
call.
Optional. Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
display_name
create_time
update_time
Example: display_name, create_time desc
.
ListEndpointsResponse
Response message for EndpointService.ListEndpoints
.
List of Endpoints in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListEndpointsRequest.page_token
to obtain that page.
ListEntityTypesRequest
Request message for FeaturestoreService.ListEntityTypes
.
parent
string
Required. The resource name of the Featurestore to list EntityTypes. Format: projects/{project}/locations/{location}/featurestores/{featurestore}
filter
string
Lists the EntityTypes that match the filter expression. The following filters are supported:
- create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
- update_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
- labels: Supports key-value equality as well as key presence.
Examples:
- create_time > "2020-01-31T15:30:00.000000Z" OR update_time > "2020-01-31T15:30:00.000000Z" --> EntityTypes created or updated after 2020-01-31T15:30:00.000000Z.
- labels.active = yes AND labels.env = prod --> EntityTypes having both (active: yes) and (env: prod) labels.
- labels.env: * --> Any EntityType which has a label with 'env' as the key.
page_size
int32
The maximum number of EntityTypes to return. The service may return fewer than this value. If unspecified, at most 1000 EntityTypes will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.
page_token
string
A page token, received from a previous FeaturestoreService.ListEntityTypes
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeaturestoreService.ListEntityTypes
must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
Supported fields:
entity_type_id
create_time
update_time
Mask specifying which fields to read.
ListEntityTypesResponse
Response message for FeaturestoreService.ListEntityTypes
.
The EntityTypes matching the request.
next_page_token
string
A token, which can be sent as ListEntityTypesRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListExecutionsRequest
Request message for MetadataService.ListExecutions
.
parent
string
Required. The MetadataStore whose Executions should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
page_size
int32
The maximum number of Executions to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.
page_token
string
A page token, received from a previous MetadataService.ListExecutions
call. Provide this to retrieve the subsequent page.
When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with an INVALID_ARGUMENT error.)
filter
string
Filter specifying the boolean condition for the Executions to satisfy in order to be part of the result set. The filter query syntax is based on https://google.aip.dev/160. The following filters are supported:
- Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, state, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC 3339 format. For example: create_time = "2020-11-19T11:30:00-04:00".
- Metadata field: To filter on metadata fields, use the traversal operation metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. If the field name contains special characters (such as a colon), embed it in double quotes. For example: metadata."field:1".number_value = 10.0.
- Context-based filtering: To filter Executions based on the Contexts to which they belong, use the function operator with the full resource name: in_context(<context-name>). For example: in_context("projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context-id>").
Each of the above supported filters can be combined using the logical operators AND and OR. The maximum allowed nested expression depth is 5.
For example: display_name = "test" AND metadata.field1.bool_value = true.
order_by
string
How the list of messages is ordered. Specify the values to order by and an ordering operation. The default sorting order is ascending. To specify descending order for a field, append a " desc" suffix; for example: "foo desc, bar". Subfields are specified with a . character, such as foo.bar. See https://google.aip.dev/132#ordering for more details.
ListExecutionsResponse
Response message for MetadataService.ListExecutions
.
The Executions retrieved from the MetadataStore.
next_page_token
string
A token, which can be sent as ListExecutionsRequest.page_token
to retrieve the next page. If this field is not populated, there are no subsequent pages.
ListFeatureGroupsRequest
Request message for FeatureRegistryService.ListFeatureGroups
.
parent
string
Required. The resource name of the Location to list FeatureGroups. Format: projects/{project}/locations/{location}
filter
string
Lists the FeatureGroups that match the filter expression. The following fields are supported:
- create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
- update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
- labels: Supports key-value equality and key presence.
Examples:
- create_time > "2020-01-01" OR update_time > "2020-01-01" --> FeatureGroups created or updated after 2020-01-01.
- labels.env = "prod" --> FeatureGroups with label "env" set to "prod".
page_size
int32
The maximum number of FeatureGroups to return. The service may return fewer than this value. If unspecified, at most 100 FeatureGroups will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.
page_token
string
A page token, received from a previous [FeatureGroupAdminService.ListFeatureGroups][] call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to [FeatureGroupAdminService.ListFeatureGroups][] must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:
create_time
update_time
ListFeatureGroupsResponse
Response message for FeatureRegistryService.ListFeatureGroups
.
The FeatureGroups matching the request.
next_page_token
string
A token, which can be sent as ListFeatureGroupsRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListFeatureOnlineStoresRequest
Request message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores
.
parent
string
Required. The resource name of the Location to list FeatureOnlineStores. Format: projects/{project}/locations/{location}
filter
string
Lists the FeatureOnlineStores that match the filter expression. The following fields are supported:
- create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
- update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
- labels: Supports key-value equality and key presence.
Examples:
- create_time > "2020-01-01" OR update_time > "2020-01-01" --> FeatureOnlineStores created or updated after 2020-01-01.
- labels.env = "prod" --> FeatureOnlineStores with label "env" set to "prod".
page_size
int32
The maximum number of FeatureOnlineStores to return. The service may return fewer than this value. If unspecified, at most 100 FeatureOnlineStores will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.
page_token
string
A page token, received from a previous FeatureOnlineStoreAdminService.ListFeatureOnlineStores
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeatureOnlineStoreAdminService.ListFeatureOnlineStores
must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:
create_time
update_time
ListFeatureOnlineStoresResponse
Response message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores
.
The FeatureOnlineStores matching the request.
next_page_token
string
A token, which can be sent as ListFeatureOnlineStoresRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListFeatureViewSyncsRequest
Request message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs
.
parent
string
Required. The resource name of the FeatureView to list FeatureViewSyncs. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}
filter
string
Lists the FeatureViewSyncs that match the filter expression. The following filters are supported:
- create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
Examples:
- create_time > "2020-01-31T15:30:00.000000Z" --> FeatureViewSyncs created after 2020-01-31T15:30:00.000000Z.
page_size
int32
The maximum number of FeatureViewSyncs to return. The service may return fewer than this value. If unspecified, at most 1000 FeatureViewSyncs will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.
page_token
string
A page token, received from a previous FeatureOnlineStoreAdminService.ListFeatureViewSyncs
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeatureOnlineStoreAdminService.ListFeatureViewSyncs
must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
Supported fields:
create_time
ListFeatureViewSyncsResponse
Response message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs
.
The FeatureViewSyncs matching the request.
next_page_token
string
A token, which can be sent as ListFeatureViewSyncsRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListFeatureViewsRequest
Request message for FeatureOnlineStoreAdminService.ListFeatureViews
.
parent
string
Required. The resource name of the FeatureOnlineStore to list FeatureViews. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}
filter
string
Lists the FeatureViews that match the filter expression. The following filters are supported:
- create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
- update_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
- labels: Supports key-value equality as well as key presence.
Examples:
- create_time > "2020-01-31T15:30:00.000000Z" OR update_time > "2020-01-31T15:30:00.000000Z" --> FeatureViews created or updated after 2020-01-31T15:30:00.000000Z.
- labels.active = yes AND labels.env = prod --> FeatureViews having both (active: yes) and (env: prod) labels.
- labels.env: * --> Any FeatureView which has a label with 'env' as the key.
page_size
int32
The maximum number of FeatureViews to return. The service may return fewer than this value. If unspecified, at most 1000 FeatureViews will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.
page_token
string
A page token, received from a previous FeatureOnlineStoreAdminService.ListFeatureViews
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeatureOnlineStoreAdminService.ListFeatureViews
must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
Supported fields:
feature_view_id
create_time
update_time
ListFeatureViewsResponse
Response message for FeatureOnlineStoreAdminService.ListFeatureViews
.
The FeatureViews matching the request.
next_page_token
string
A token, which can be sent as ListFeatureViewsRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListFeaturesRequest
Request message for FeaturestoreService.ListFeatures
. Request message for FeatureRegistryService.ListFeatures
.
parent
string
Required. The resource name of the Location to list Features. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}
filter
string
Lists the Features that match the filter expression. The following filters are supported:
- value_type: Supports = and != comparisons.
- create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
- update_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
- labels: Supports key-value equality as well as key presence.
Examples:
- value_type = DOUBLE --> Features whose type is DOUBLE.
- create_time > "2020-01-31T15:30:00.000000Z" OR update_time > "2020-01-31T15:30:00.000000Z" --> Features created or updated after 2020-01-31T15:30:00.000000Z.
- labels.active = yes AND labels.env = prod --> Features having both (active: yes) and (env: prod) labels.
- labels.env: * --> Any Feature which has a label with 'env' as the key.
page_size
int32
The maximum number of Features to return. The service may return fewer than this value. If unspecified, at most 1000 Features will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.
page_token
string
A page token, received from a previous FeaturestoreService.ListFeatures
call or FeatureRegistryService.ListFeatures
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeaturestoreService.ListFeatures
or FeatureRegistryService.ListFeatures
must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
feature_id
value_type
(Not supported for FeatureRegistry Feature)create_time
update_time
Mask specifying which fields to read.
latest_stats_count
int32
Only applicable for Vertex AI Feature Store (Legacy). If set, returns the most recent ListFeaturesRequest.latest_stats_count
of stats for each Feature in the response. Valid values are [0, 10]. If fewer stats exist than ListFeaturesRequest.latest_stats_count
, all existing stats are returned.
ListFeaturesResponse
Response message for FeaturestoreService.ListFeatures
. Response message for FeatureRegistryService.ListFeatures
.
The Features matching the request.
next_page_token
string
A token, which can be sent as ListFeaturesRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListFeaturestoresRequest
Request message for FeaturestoreService.ListFeaturestores
.
parent
string
Required. The resource name of the Location to list Featurestores. Format: projects/{project}/locations/{location}
filter
string
Lists the featurestores that match the filter expression. The following fields are supported:
- create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
- update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
- online_serving_config.fixed_node_count: Supports =, !=, <, >, <=, and >= comparisons.
- labels: Supports key-value equality and key presence.
Examples:
- create_time > "2020-01-01" OR update_time > "2020-01-01" --> Featurestores created or updated after 2020-01-01.
- labels.env = "prod" --> Featurestores with label "env" set to "prod".
page_size
int32
The maximum number of Featurestores to return. The service may return fewer than this value. If unspecified, at most 100 Featurestores will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.
page_token
string
A page token, received from a previous FeaturestoreService.ListFeaturestores
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeaturestoreService.ListFeaturestores
must match the call that provided the page token.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:
create_time
update_time
online_serving_config.fixed_node_count
Mask specifying which fields to read.
ListFeaturestoresResponse
Response message for FeaturestoreService.ListFeaturestores
.
The Featurestores matching the request.
next_page_token
string
A token, which can be sent as ListFeaturestoresRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListHyperparameterTuningJobsRequest
Request message for JobService.ListHyperparameterTuningJobs
.
parent
string
Required. The resource name of the Location to list the HyperparameterTuningJobs from. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
Supported fields:
- display_name supports =, != comparisons, and : wildcard.
- state supports =, != comparisons.
- create_time supports =, !=, <, <=, >, >= comparisons. create_time must be in RFC 3339 format.
- labels supports general map functions, that is:
labels.key=value - key:value equality
labels.key:* - key existence
Some examples of using the filter are:
- state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
- state!="JOB_STATE_FAILED" OR display_name="my_job"
- NOT display_name="my_job"
- create_time>"2021-05-18T00:00:00Z"
- labels.keyA=valueA
- labels.keyB:*
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListHyperparameterTuningJobsResponse.next_page_token
of the previous JobService.ListHyperparameterTuningJobs
call.
Mask specifying which fields to read.
ListHyperparameterTuningJobsResponse
Response message for JobService.ListHyperparameterTuningJobs
List of HyperparameterTuningJobs in the requested page. HyperparameterTuningJob.trials
of the jobs will not be returned.
next_page_token
string
A token to retrieve the next page of results. Pass to ListHyperparameterTuningJobsRequest.page_token
to obtain that page.
ListIndexEndpointsRequest
Request message for IndexEndpointService.ListIndexEndpoints
.
parent
string
Required. The resource name of the Location from which to list the IndexEndpoints. Format: projects/{project}/locations/{location}
filter
string
Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
- `index_endpoint` supports `=` and `!=`. `index_endpoint` represents the IndexEndpoint ID, i.e. the last segment of the IndexEndpoint's resource name.
- `display_name` supports `=`, `!=`, and `regex()` (uses re2 syntax).
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` or `labels:key` - key existence
  - A key including a space must be quoted: `labels."a key"`.
Some examples:
- `index_endpoint="1"`
- `display_name="myDisplayName"`
- `regex(display_name, "^A")` - the display name starts with an A.
- `labels.myKey="myValue"`
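The `regex()` clause uses re2 syntax evaluated by the service. For simple anchors like the `^A` example above, Python's `re` module accepts the same syntax, so the semantics can be previewed locally (illustrative names only):

```python
import re

# Mimics regex(display_name, "^A") from the example above:
# keep names whose display name starts with an "A".
names = ["Alpha-endpoint", "beta-endpoint", "Archive"]
matches = [n for n in names if re.search(r"^A", n)]
```

Note that re2 and Python's `re` diverge on advanced features (e.g. backreferences), so only treat this as a sanity check for simple patterns.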
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListIndexEndpointsResponse.next_page_token
of the previous IndexEndpointService.ListIndexEndpoints
call.
read_mask
FieldMask
Optional. Mask specifying which fields to read.
ListIndexEndpointsResponse
Response message for IndexEndpointService.ListIndexEndpoints
.
List of IndexEndpoints in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListIndexEndpointsRequest.page_token
to obtain that page.
ListIndexesRequest
Request message for IndexService.ListIndexes
.
parent
string
Required. The resource name of the Location from which to list the Indexes. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListIndexesResponse.next_page_token
of the previous IndexService.ListIndexes
call.
read_mask
FieldMask
Mask specifying which fields to read.
ListIndexesResponse
Response message for IndexService.ListIndexes
.
List of indexes in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListIndexesRequest.page_token
to obtain that page.
ListMetadataSchemasRequest
Request message for MetadataService.ListMetadataSchemas
.
parent
string
Required. The MetadataStore whose MetadataSchemas should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
page_size
int32
The maximum number of MetadataSchemas to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.
page_token
string
A page token, received from a previous MetadataService.ListMetadataSchemas
call. Provide this to retrieve the next page.
When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with INVALID_ARGUMENT error.)
filter
string
A query to filter available MetadataSchemas for matching results.
ListMetadataSchemasResponse
Response message for MetadataService.ListMetadataSchemas
.
The MetadataSchemas found for the MetadataStore.
next_page_token
string
A token, which can be sent as ListMetadataSchemasRequest.page_token
to retrieve the next page. If this field is not populated, there are no subsequent pages.
ListMetadataStoresRequest
Request message for MetadataService.ListMetadataStores
.
parent
string
Required. The Location whose MetadataStores should be listed. Format: projects/{project}/locations/{location}
page_size
int32
The maximum number of Metadata Stores to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.
page_token
string
A page token, received from a previous MetadataService.ListMetadataStores
call. Provide this to retrieve the subsequent page.
When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with INVALID_ARGUMENT error.)
ListMetadataStoresResponse
Response message for MetadataService.ListMetadataStores
.
The MetadataStores found for the Location.
next_page_token
string
A token, which can be sent as ListMetadataStoresRequest.page_token
to retrieve the next page. If this field is not populated, there are no subsequent pages.
ListModelDeploymentMonitoringJobsRequest
Request message for JobService.ListModelDeploymentMonitoringJobs
.
parent
string
Required. The parent of the ModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
Supported fields:
- `display_name` supports `=` and `!=` comparisons, and `:` wildcard.
- `state` supports `=` and `!=` comparisons.
- `create_time` supports `=`, `!=`, `<`, `<=`, `>`, and `>=` comparisons. `create_time` must be in RFC 3339 format.
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` - key existence
Some examples of using the filter are:
state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
state!="JOB_STATE_FAILED" OR display_name="my_job"
NOT display_name="my_job"
create_time>"2021-05-18T00:00:00Z"
labels.keyA=valueA
labels.keyB:*
page_size
int32
The standard list page size.
page_token
string
The standard list page token.
read_mask
FieldMask
Mask specifying which fields to read.
ListModelDeploymentMonitoringJobsResponse
Response message for JobService.ListModelDeploymentMonitoringJobs
.
A list of ModelDeploymentMonitoringJobs that matches the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListModelEvaluationSlicesRequest
Request message for ModelService.ListModelEvaluationSlices
.
parent
string
Required. The resource name of the ModelEvaluation to list the ModelEvaluationSlices from. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}
filter
string
The standard list filter.
- `slice.dimension` - for `=`.
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListModelEvaluationSlicesResponse.next_page_token
of the previous ModelService.ListModelEvaluationSlices
call.
read_mask
FieldMask
Mask specifying which fields to read.
ListModelEvaluationSlicesResponse
Response message for ModelService.ListModelEvaluationSlices
.
List of ModelEvaluationSlices in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListModelEvaluationSlicesRequest.page_token
to obtain that page.
ListModelEvaluationsRequest
Request message for ModelService.ListModelEvaluations
.
parent
string
Required. The resource name of the Model to list the ModelEvaluations from. Format: projects/{project}/locations/{location}/models/{model}
filter
string
The standard list filter.
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListModelEvaluationsResponse.next_page_token
of the previous ModelService.ListModelEvaluations
call.
read_mask
FieldMask
Mask specifying which fields to read.
ListModelEvaluationsResponse
Response message for ModelService.ListModelEvaluations
.
List of ModelEvaluations in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListModelEvaluationsRequest.page_token
to obtain that page.
ListModelVersionsRequest
Request message for ModelService.ListModelVersions
.
name
string
Required. The name of the model to list versions for.
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via next_page_token
of the previous ListModelVersions
call.
filter
string
An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` or `labels:key` - key existence
  - A key including a space must be quoted: `labels."a key"`.
Some examples:
- `labels.myKey="myValue"`
read_mask
FieldMask
Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
create_time
update_time
Example: update_time asc, create_time desc
.
ListModelVersionsResponse
Response message for ModelService.ListModelVersions
List of Model versions in the requested page. In the returned Model name field, the version ID is included instead of the revision tag.
next_page_token
string
A token to retrieve the next page of results. Pass to ListModelVersionsRequest.page_token
to obtain that page.
ListModelsRequest
Request message for ModelService.ListModels
.
parent
string
Required. The resource name of the Location to list the Models from. Format: projects/{project}/locations/{location}
filter
string
An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
- `model` supports `=` and `!=`. `model` represents the Model ID, i.e. the last segment of the Model's resource name.
- `display_name` supports `=` and `!=`.
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` or `labels:key` - key existence
  - A key including a space must be quoted: `labels."a key"`.
- `base_model_name` only supports `=`.
Some examples:
- `model=1234`
- `displayName="myDisplayName"`
- `labels.myKey="myValue"`
- `baseModelName="text-bison"`
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListModelsResponse.next_page_token
of the previous ModelService.ListModels
call.
read_mask
FieldMask
Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
display_name
create_time
update_time
Example: display_name, create_time desc
.
ListModelsResponse
Response message for ModelService.ListModels
List of Models in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListModelsRequest.page_token
to obtain that page.
ListNasJobsRequest
Request message for JobService.ListNasJobs
.
parent
string
Required. The resource name of the Location to list the NasJobs from. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
Supported fields:
- `display_name` supports `=` and `!=` comparisons, and `:` wildcard.
- `state` supports `=` and `!=` comparisons.
- `create_time` supports `=`, `!=`, `<`, `<=`, `>`, and `>=` comparisons. `create_time` must be in RFC 3339 format.
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` - key existence
Some examples of using the filter are:
state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
state!="JOB_STATE_FAILED" OR display_name="my_job"
NOT display_name="my_job"
create_time>"2021-05-18T00:00:00Z"
labels.keyA=valueA
labels.keyB:*
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListNasJobsResponse.next_page_token
of the previous JobService.ListNasJobs
call.
read_mask
FieldMask
Mask specifying which fields to read.
ListNasJobsResponse
Response message for JobService.ListNasJobs
List of NasJobs in the requested page. NasJob.nas_job_output
of the jobs will not be returned.
next_page_token
string
A token to retrieve the next page of results. Pass to ListNasJobsRequest.page_token
to obtain that page.
ListNasTrialDetailsRequest
Request message for JobService.ListNasTrialDetails
.
parent
string
Required. The name of the NasJob resource. Format: projects/{project}/locations/{location}/nasJobs/{nas_job}
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListNasTrialDetailsResponse.next_page_token
of the previous JobService.ListNasTrialDetails
call.
ListNasTrialDetailsResponse
Response message for JobService.ListNasTrialDetails
List of top NasTrials in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListNasTrialDetailsRequest.page_token
to obtain that page.
ListNotebookExecutionJobsRequest
Request message for NotebookService.ListNotebookExecutionJobs
.
parent
string
Required. The resource name of the Location from which to list the NotebookExecutionJobs. Format: projects/{project}/locations/{location}
filter
string
Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
- `notebookExecutionJob` supports `=` and `!=`. `notebookExecutionJob` represents the NotebookExecutionJob ID.
- `displayName` supports `=`, `!=`, and regex.
- `schedule` supports `=`, `!=`, and regex.
Some examples:
- `notebookExecutionJob="123"`
- `notebookExecutionJob="my-execution-job"`
- `displayName="myDisplayName"` and `displayName=~"myDisplayNameRegex"`
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListNotebookExecutionJobsResponse.next_page_token
of the previous NotebookService.ListNotebookExecutionJobs
call.
order_by
string
Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
display_name
create_time
update_time
Example: display_name, create_time desc
.
view
NotebookExecutionJobView
Optional. The NotebookExecutionJob view. Defaults to BASIC.
ListNotebookExecutionJobsResponse
Response message for NotebookService.ListNotebookExecutionJobs
.
List of NotebookExecutionJobs in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListNotebookExecutionJobsRequest.page_token
to obtain that page.
ListNotebookRuntimeTemplatesRequest
Request message for NotebookService.ListNotebookRuntimeTemplates
.
parent
string
Required. The resource name of the Location from which to list the NotebookRuntimeTemplates. Format: projects/{project}/locations/{location}
filter
string
Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
- `notebookRuntimeTemplate` supports `=` and `!=`. `notebookRuntimeTemplate` represents the NotebookRuntimeTemplate ID, i.e. the last segment of the NotebookRuntimeTemplate's resource name.
- `display_name` supports `=` and `!=`.
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` or `labels:key` - key existence
  - A key including a space must be quoted: `labels."a key"`.
- `notebookRuntimeType` supports `=` and `!=`. notebookRuntimeType enum: [USER_DEFINED, ONE_CLICK].
Some examples:
- `notebookRuntimeTemplate=notebookRuntimeTemplate123`
- `displayName="myDisplayName"`
- `labels.myKey="myValue"`
- `notebookRuntimeType=USER_DEFINED`
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListNotebookRuntimeTemplatesResponse.next_page_token
of the previous NotebookService.ListNotebookRuntimeTemplates
call.
read_mask
FieldMask
Optional. Mask specifying which fields to read.
order_by
string
Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
display_name
create_time
update_time
Example: display_name, create_time desc
.
ListNotebookRuntimeTemplatesResponse
Response message for NotebookService.ListNotebookRuntimeTemplates
.
List of NotebookRuntimeTemplates in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListNotebookRuntimeTemplatesRequest.page_token
to obtain that page.
ListNotebookRuntimesRequest
Request message for NotebookService.ListNotebookRuntimes
.
parent
string
Required. The resource name of the Location from which to list the NotebookRuntimes. Format: projects/{project}/locations/{location}
filter
string
Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
- `notebookRuntime` supports `=` and `!=`. `notebookRuntime` represents the NotebookRuntime ID, i.e. the last segment of the NotebookRuntime's resource name.
- `displayName` supports `=`, `!=`, and regex.
- `notebookRuntimeTemplate` supports `=` and `!=`. `notebookRuntimeTemplate` represents the NotebookRuntimeTemplate ID, i.e. the last segment of the NotebookRuntimeTemplate's resource name.
- `healthState` supports `=` and `!=`. healthState enum: [HEALTHY, UNHEALTHY, HEALTH_STATE_UNSPECIFIED].
- `runtimeState` supports `=` and `!=`. runtimeState enum: [RUNTIME_STATE_UNSPECIFIED, RUNNING, BEING_STARTED, BEING_STOPPED, STOPPED, BEING_UPGRADED, ERROR, INVALID].
- `runtimeUser` supports `=` and `!=`.
- API version is UI only: `uiState` supports `=` and `!=`. uiState enum: [UI_RESOURCE_STATE_UNSPECIFIED, UI_RESOURCE_STATE_BEING_CREATED, UI_RESOURCE_STATE_ACTIVE, UI_RESOURCE_STATE_BEING_DELETED, UI_RESOURCE_STATE_CREATION_FAILED].
- `notebookRuntimeType` supports `=` and `!=`. notebookRuntimeType enum: [USER_DEFINED, ONE_CLICK].
Some examples:
- `notebookRuntime="notebookRuntime123"`
- `displayName="myDisplayName"` and `displayName=~"myDisplayNameRegex"`
- `notebookRuntimeTemplate="notebookRuntimeTemplate321"`
- `healthState=HEALTHY`
- `runtimeState=RUNNING`
- `runtimeUser="test@google.com"`
- `uiState=UI_RESOURCE_STATE_BEING_DELETED`
- `notebookRuntimeType=USER_DEFINED`
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListNotebookRuntimesResponse.next_page_token
of the previous NotebookService.ListNotebookRuntimes
call.
read_mask
FieldMask
Optional. Mask specifying which fields to read.
order_by
string
Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:
display_name
create_time
update_time
Example: display_name, create_time desc
.
ListNotebookRuntimesResponse
Response message for NotebookService.ListNotebookRuntimes
.
List of NotebookRuntimes in the requested page.
next_page_token
string
A token to retrieve next page of results. Pass to ListNotebookRuntimesRequest.page_token
to obtain that page.
ListOptimalTrialsRequest
Request message for VizierService.ListOptimalTrials
.
parent
string
Required. The name of the Study that the optimal Trial belongs to.
ListOptimalTrialsResponse
Response message for VizierService.ListOptimalTrials
.
The Pareto-optimal Trials for a multi-objective Study, or the optimal Trial for a single-objective Study. The definition of Pareto-optimal is described at https://en.wikipedia.org/wiki/Pareto_efficiency
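Pareto-optimality has a simple operational meaning: a Trial is kept if no other Trial is at least as good on every objective and strictly better on at least one. A minimal sketch for two maximized metrics, illustrative only and not the service's implementation:

```python
from typing import List, Tuple

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    # Keep p unless some q dominates it: q >= p on both objectives
    # and q > p on at least one (both objectives maximized here).
    def dominated(p: Tuple[float, float]) -> bool:
        return any(
            q != p and q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
    return [p for p in points if not dominated(p)]

# (0.5, 0.5) is dominated by every other point; the rest trade off
# the two objectives and all survive on the front.
front = pareto_front([(1.0, 5.0), (2.0, 4.0), (1.5, 4.5), (0.5, 0.5)])
```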
ListPersistentResourcesRequest
Request message for PersistentResourceService.ListPersistentResources
.
parent
string
Required. The resource name of the Location to list the PersistentResources from. Format: projects/{project}/locations/{location}
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListPersistentResourcesResponse.next_page_token
of the previous PersistentResourceService.ListPersistentResources call.
ListPersistentResourcesResponse
Response message for PersistentResourceService.ListPersistentResources
next_page_token
string
A token to retrieve next page of results. Pass to ListPersistentResourcesRequest.page_token
to obtain that page.
ListPipelineJobsRequest
Request message for PipelineService.ListPipelineJobs
.
parent
string
Required. The resource name of the Location to list the PipelineJobs from. Format: projects/{project}/locations/{location}
filter
string
Lists the PipelineJobs that match the filter expression. The following fields are supported:
- `pipeline_name`: Supports `=` and `!=` comparisons.
- `display_name`: Supports `=`, `!=` comparisons, and `:` wildcard.
- `pipeline_job_user_id`: Supports `=`, `!=` comparisons, and `:` wildcard. For example, you can check whether a pipeline's display_name contains "step" with `display_name:"*step*"`.
- `state`: Supports `=` and `!=` comparisons.
- `create_time`: Supports `=`, `!=`, `<`, `>`, `<=`, and `>=` comparisons. Values must be in RFC 3339 format.
- `update_time`: Supports `=`, `!=`, `<`, `>`, `<=`, and `>=` comparisons. Values must be in RFC 3339 format.
- `end_time`: Supports `=`, `!=`, `<`, `>`, `<=`, and `>=` comparisons. Values must be in RFC 3339 format.
- `labels`: Supports key-value equality and key presence.
- `template_uri`: Supports `=`, `!=` comparisons, and `:` wildcard.
- `template_metadata.version`: Supports `=`, `!=` comparisons, and `:` wildcard.
Filter expressions can be combined together using logical operators (`AND` & `OR`). For example: `pipeline_name="test" AND create_time>"2020-05-18T13:30:00Z"`.
The syntax to define filter expression is based on https://google.aip.dev/160.
Examples:
- `create_time>"2021-05-18T00:00:00Z" OR update_time>"2020-05-18T00:00:00Z"` - PipelineJobs created or updated after 2020-05-18 00:00:00 UTC.
- `labels.env = "prod"` - PipelineJobs with label "env" set to "prod".
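Timestamp comparisons such as the create_time example above require RFC 3339 values. One way to generate them from Python, as a sketch independent of any client library:

```python
from datetime import datetime, timezone

def rfc3339(dt: datetime) -> str:
    # RFC 3339 / ISO 8601 timestamp with an explicit UTC "Z" suffix,
    # the form the filter examples in this reference use.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

cutoff = datetime(2021, 5, 18, tzinfo=timezone.utc)
filter_expr = (
    f'create_time>"{rfc3339(cutoff)}" OR update_time>"{rfc3339(cutoff)}"'
)
```

Passing an aware datetime ensures the conversion to UTC is well-defined; naive datetimes would be interpreted in local time.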
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListPipelineJobsResponse.next_page_token
of the previous PipelineService.ListPipelineJobs
call.
order_by
string
A comma-separated list of fields to order by. The default sort order is ascending; use "desc" after a field name for descending. Multiple order_by fields can be provided, e.g. "create_time desc, end_time" or "end_time, start_time, update_time". For example, "create_time desc, end_time" orders results by create time in descending order, and jobs with the same create time are ordered by end time in ascending order. If order_by is not specified, results are ordered by create_time in descending order. Supported fields:
create_time
update_time
end_time
start_time
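Multi-field ordering such as "create_time desc, end_time" sorts by the first field and breaks ties with the later ones. Locally, the same semantics can be reproduced with stable sorts applied in reverse key order (hypothetical records for illustration):

```python
# Hypothetical job records; only create_time / end_time matter here.
# RFC 3339 strings in the same zone compare correctly as plain strings.
jobs = [
    {"name": "a", "create_time": "2021-01-02T00:00:00Z", "end_time": "2021-01-05T00:00:00Z"},
    {"name": "b", "create_time": "2021-01-02T00:00:00Z", "end_time": "2021-01-03T00:00:00Z"},
    {"name": "c", "create_time": "2021-01-01T00:00:00Z", "end_time": "2021-01-04T00:00:00Z"},
]

# Emulate order_by="create_time desc, end_time": sort by the secondary
# key first, then stable-sort by the primary key so ties keep the
# secondary ordering.
ordered = sorted(jobs, key=lambda j: j["end_time"])
ordered = sorted(ordered, key=lambda j: j["create_time"], reverse=True)
names = [j["name"] for j in ordered]
```

Python's `sorted` is stable even with `reverse=True`, which is what makes the two-pass approach correct.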
read_mask
FieldMask
Mask specifying which fields to read.
ListPipelineJobsResponse
Response message for PipelineService.ListPipelineJobs
List of PipelineJobs in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListPipelineJobsRequest.page_token
to obtain that page.
ListSavedQueriesRequest
Request message for DatasetService.ListSavedQueries
.
parent
string
Required. The resource name of the Dataset to list SavedQueries from. Format: projects/{project}/locations/{location}/datasets/{dataset}
filter
string
The standard list filter.
page_size
int32
The standard list page size.
page_token
string
The standard list page token.
read_mask
FieldMask
Mask specifying which fields to read.
order_by
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
ListSavedQueriesResponse
Response message for DatasetService.ListSavedQueries
.
A list of SavedQueries that match the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListSchedulesRequest
Request message for ScheduleService.ListSchedules
.
parent
string
Required. The resource name of the Location to list the Schedules from. Format: projects/{project}/locations/{location}
filter
string
Lists the Schedules that match the filter expression. The following fields are supported:
- `display_name`: Supports `=`, `!=` comparisons, and `:` wildcard.
- `state`: Supports `=` and `!=` comparisons.
- `request`: Supports existence of the <request_type> check (e.g. `create_pipeline_job_request:*` --> the Schedule has a create_pipeline_job_request).
- `create_time`: Supports `=`, `!=`, `<`, `>`, `<=`, and `>=` comparisons. Values must be in RFC 3339 format.
- `start_time`: Supports `=`, `!=`, `<`, `>`, `<=`, and `>=` comparisons. Values must be in RFC 3339 format.
- `end_time`: Supports `=`, `!=`, `<`, `>`, `<=`, `>=` comparisons and `:*` existence check. Values must be in RFC 3339 format.
- `next_run_time`: Supports `=`, `!=`, `<`, `>`, `<=`, and `>=` comparisons. Values must be in RFC 3339 format.
Filter expressions can be combined together using logical operators (`NOT`, `AND` & `OR`). The syntax to define filter expression is based on https://google.aip.dev/160.
Examples:
state="ACTIVE" AND display_name:"my_schedule_*"
NOT display_name="my_schedule"
create_time>"2021-05-18T00:00:00Z"
end_time>"2021-05-18T00:00:00Z" OR NOT end_time:*
create_pipeline_job_request:*
page_size
int32
The standard list page size. Default to 100 if not specified.
page_token
string
The standard list page token. Typically obtained via ListSchedulesResponse.next_page_token
of the previous ScheduleService.ListSchedules
call.
order_by
string
A comma-separated list of fields to order by. The default sort order is in ascending order. Use "desc" after a field name for descending. You can have multiple order_by fields provided.
For example, using "create_time desc, end_time" will order results by create time in descending order, and if there are multiple schedules having the same create time, order them by the end time in ascending order.
If order_by is not specified, results are ordered by create_time in descending order.
Supported fields:
create_time
start_time
end_time
next_run_time
ListSchedulesResponse
Response message for ScheduleService.ListSchedules
List of Schedules in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListSchedulesRequest.page_token
to obtain that page.
ListSpecialistPoolsRequest
Request message for SpecialistPoolService.ListSpecialistPools
.
parent
string
Required. The name of the SpecialistPool's parent resource. Format: projects/{project}/locations/{location}
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained by ListSpecialistPoolsResponse.next_page_token
of the previous SpecialistPoolService.ListSpecialistPools
call. Returns the first page if empty.
read_mask
FieldMask
Mask specifying which fields to read. FieldMask represents a set of symbolic field paths.
ListSpecialistPoolsResponse
Response message for SpecialistPoolService.ListSpecialistPools
.
A list of SpecialistPools that matches the specified filter in the request.
next_page_token
string
The standard List next-page token.
ListStudiesRequest
Request message for VizierService.ListStudies
.
parent
string
Required. The resource name of the Location to list the Study from. Format: projects/{project}/locations/{location}
page_token
string
Optional. A page token to request the next page of results. If unspecified, there are no subsequent pages.
page_size
int32
Optional. The maximum number of studies to return per "page" of results. If unspecified, the service will pick an appropriate default.
ListStudiesResponse
Response message for VizierService.ListStudies
.
The studies associated with the project.
next_page_token
string
Pass this token as the page_token
field of the request for a subsequent call. If this field is omitted, there are no subsequent pages.
ListTensorboardExperimentsRequest
Request message for TensorboardService.ListTensorboardExperiments
.
parent
string
Required. The resource name of the Tensorboard to list TensorboardExperiments. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
filter
string
Lists the TensorboardExperiments that match the filter expression.
page_size
int32
The maximum number of TensorboardExperiments to return. The service may return fewer than this value. If unspecified, at most 50 TensorboardExperiments are returned. The maximum value is 1000; values above 1000 are coerced to 1000.
page_token
string
A page token, received from a previous TensorboardService.ListTensorboardExperiments
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to TensorboardService.ListTensorboardExperiments
must match the call that provided the page token.
order_by
string
Field to use to sort the list.
read_mask
FieldMask
Mask specifying which fields to read.
ListTensorboardExperimentsResponse
Response message for TensorboardService.ListTensorboardExperiments
.
The TensorboardExperiments matching the request.
next_page_token
string
A token, which can be sent as ListTensorboardExperimentsRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListTensorboardRunsRequest
Request message for TensorboardService.ListTensorboardRuns
.
parent
string
Required. The resource name of the TensorboardExperiment to list TensorboardRuns. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
filter
string
Lists the TensorboardRuns that match the filter expression.
page_size
int32
The maximum number of TensorboardRuns to return. The service may return fewer than this value. If unspecified, at most 50 TensorboardRuns are returned. The maximum value is 1000; values above 1000 are coerced to 1000.
page_token
string
A page token, received from a previous TensorboardService.ListTensorboardRuns
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to TensorboardService.ListTensorboardRuns
must match the call that provided the page token.
order_by
string
Field to use to sort the list.
read_mask
FieldMask
Mask specifying which fields to read.
ListTensorboardRunsResponse
Response message for TensorboardService.ListTensorboardRuns
.
The TensorboardRuns matching the request.
next_page_token
string
A token, which can be sent as ListTensorboardRunsRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListTensorboardTimeSeriesRequest
Request message for TensorboardService.ListTensorboardTimeSeries
.
parent
string
Required. The resource name of the TensorboardRun to list TensorboardTimeSeries. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
filter
string
Lists the TensorboardTimeSeries that match the filter expression.
page_size
int32
The maximum number of TensorboardTimeSeries to return. The service may return fewer than this value. If unspecified, at most 50 TensorboardTimeSeries are returned. The maximum value is 1000; values above 1000 are coerced to 1000.
page_token
string
A page token, received from a previous TensorboardService.ListTensorboardTimeSeries
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to TensorboardService.ListTensorboardTimeSeries
must match the call that provided the page token.
order_by
string
Field to use to sort the list.
read_mask
FieldMask
Mask specifying which fields to read.
ListTensorboardTimeSeriesResponse
Response message for TensorboardService.ListTensorboardTimeSeries
.
The TensorboardTimeSeries matching the request.
next_page_token
string
A token, which can be sent as ListTensorboardTimeSeriesRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListTensorboardsRequest
Request message for TensorboardService.ListTensorboards
.
parent
string
Required. The resource name of the Location to list Tensorboards. Format: projects/{project}/locations/{location}
filter
string
Lists the Tensorboards that match the filter expression.
page_size
int32
The maximum number of Tensorboards to return. The service may return fewer than this value. If unspecified, at most 100 Tensorboards are returned. The maximum value is 100; values above 100 are coerced to 100.
page_token
string
A page token, received from a previous TensorboardService.ListTensorboards
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to TensorboardService.ListTensorboards
must match the call that provided the page token.
order_by
string
Field to use to sort the list.
read_mask
FieldMask
Mask specifying which fields to read.
ListTensorboardsResponse
Response message for TensorboardService.ListTensorboards
.
The Tensorboards matching the request.
next_page_token
string
A token, which can be sent as ListTensorboardsRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
ListTrainingPipelinesRequest
Request message for PipelineService.ListTrainingPipelines
.
parent
string
Required. The resource name of the Location to list the TrainingPipelines from. Format: projects/{project}/locations/{location}
filter
string
The standard list filter.
Supported fields:
- `display_name` supports `=` and `!=` comparisons, and `:` wildcard.
- `state` supports `=` and `!=` comparisons.
- `training_task_definition` supports `=` and `!=` comparisons, and `:` wildcard.
- `create_time` supports `=`, `!=`, `<`, `<=`, `>`, and `>=` comparisons. `create_time` must be in RFC 3339 format.
- `labels` supports general map functions, that is:
  - `labels.key=value` - key:value equality
  - `labels.key:*` - key existence
Some examples of using the filter are:
state="PIPELINE_STATE_SUCCEEDED" AND display_name:"my_pipeline_*"
state!="PIPELINE_STATE_FAILED" OR display_name="my_pipeline"
NOT display_name="my_pipeline"
create_time>"2021-05-18T00:00:00Z"
training_task_definition:"*automl_text_classification*"
page_size
int32
The standard list page size.
page_token
string
The standard list page token. Typically obtained via ListTrainingPipelinesResponse.next_page_token
of the previous PipelineService.ListTrainingPipelines
call.
read_mask
FieldMask
Mask specifying which fields to read.
ListTrainingPipelinesResponse
Response message for PipelineService.ListTrainingPipelines
List of TrainingPipelines in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListTrainingPipelinesRequest.page_token
to obtain that page.
ListTrialsRequest
Request message for VizierService.ListTrials
.
parent
string
Required. The resource name of the Study to list the Trial from. Format: projects/{project}/locations/{location}/studies/{study}
page_token
string
Optional. A page token to request the next page of results. If unspecified, there are no subsequent pages.
page_size
int32
Optional. The number of Trials to retrieve per "page" of results. If unspecified, the service will pick an appropriate default.
ListTrialsResponse
Response message for VizierService.ListTrials
.
The Trials associated with the Study.
next_page_token
string
Pass this token as the page_token
field of the request for a subsequent call. If this field is omitted, there are no subsequent pages.
ListTuningJobsRequest
Request message for GenAiTuningService.ListTuningJobs
.
parent
string
Required. The resource name of the Location to list the TuningJobs from. Format: projects/{project}/locations/{location}
filter
string
Optional. The standard list filter.
page_size
int32
Optional. The standard list page size.
page_token
string
Optional. The standard list page token. Typically obtained via ListTuningJobsResponse.next_page_token of the previous GenAiTuningService.ListTuningJobs call.
ListTuningJobsResponse
Response message for GenAiTuningService.ListTuningJobs
List of TuningJobs in the requested page.
next_page_token
string
A token to retrieve the next page of results. Pass to ListTuningJobsRequest.page_token
to obtain that page.
LogprobsResult
Logprobs Result
Length = total number of decoding steps.
Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.
Candidate
Candidate for the logprobs token and score.
token
string
The candidate's token string value.
token_id
int32
The candidate's token id value.
log_probability
float
The candidate's log probability.
TopCandidates
Candidates with top log probabilities at each decoding step.
Sorted by log probability in descending order.
LookupStudyRequest
Request message for VizierService.LookupStudy
.
parent
string
Required. The resource name of the Location to get the Study from. Format: projects/{project}/locations/{location}
display_name
string
Required. The user-defined display name of the Study
MachineSpec
Specification of a single machine.
machine_type
string
Immutable. The type of the machine.
See the list of machine types supported for prediction
See the list of machine types supported for custom training.
For DeployedModel
this field is optional, and the default value is n1-standard-2
. For BatchPredictionJob
or as part of WorkerPoolSpec
this field is required.
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count
.
accelerator_count
int32
The number of accelerators to attach to the machine.
tpu_topology
string
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
ManualBatchTuningParameters
Manual batch tuning parameters.
batch_size
int32
Immutable. The number of records (e.g. instances) of the operation given in each batch to a machine replica. Consider the machine type and the size of a single record when setting this parameter: a higher value speeds up the batch operation's execution, but a value that is too high will cause a whole batch not to fit in a machine's memory, and the whole operation will fail. The default value is 64.
Measurement
A message representing a Measurement of a Trial. A Measurement contains the metrics obtained by executing a Trial using suggested hyperparameter values.
Output only. Time that the Trial has been running at the point of this Measurement.
step_count
int64
Output only. The number of steps the machine learning model has been trained for. Must be non-negative.
Output only. A list of metrics obtained by evaluating the objective functions using suggested Parameter values.
Metric
A message representing a metric in the measurement.
metric_id
string
Output only. The ID of the Metric. The Metric should be defined in StudySpec's Metrics
.
value
double
Output only. The value for this metric.
MergeVersionAliasesRequest
Request message for ModelService.MergeVersionAliases
.
name
string
Required. The name of the model version to merge aliases, with a version ID explicitly included.
Example: projects/{project}/locations/{location}/models/{model}@1234
version_aliases[]
string
Required. The set of version aliases to merge. The alias should be at most 128 characters, and match [a-z][a-zA-Z0-9-]{0,126}[a-z-0-9]
. Adding the - prefix to an alias means removing that alias from the version. The - is NOT counted toward the 128-character limit. Example: -golden means removing the golden alias from the version.
There is NO ordering in aliases, which means 1) the aliases returned from the GetModel API might not be in exactly the same order as in this MergeVersionAliases API, and 2) adding and deleting the same alias in the same request is not recommended; the two operations cancel each other out.
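The merge semantics can be sketched locally. This assumes one reading of the "cancelled out" note (adding and removing the same alias in one request is a net no-op); `merge_version_aliases` is a hypothetical helper, not part of any SDK:

```python
import re

# Alias pattern from the doc: [a-z][a-zA-Z0-9-]{0,126}[a-z-0-9]
ALIAS_RE = re.compile(r"[a-z][a-zA-Z0-9-]{0,126}[a-z-0-9]")

def merge_version_aliases(current, merge_list):
    """Local model of the documented merge: a plain alias is added, a
    '-'-prefixed alias is removed, and adding plus removing the same
    alias in one request cancels out (net no-op)."""
    to_add = {a for a in merge_list if not a.startswith("-")}
    to_remove = {a[1:] for a in merge_list if a.startswith("-")}
    for alias in to_add | to_remove:
        if not ALIAS_RE.fullmatch(alias):
            raise ValueError(f"invalid alias: {alias!r}")
    cancelled = to_add & to_remove  # same alias added and removed
    return (set(current) | (to_add - cancelled)) - (to_remove - cancelled)

# -golden removes the existing 'golden' alias; 'candidate' is added.
aliases = merge_version_aliases({"golden", "stable"}, ["-golden", "candidate"])
```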
MetadataSchema
Instance of a general MetadataSchema.
name
string
Output only. The resource name of the MetadataSchema.
schema_version
string
The version of the MetadataSchema. The version's format must match the following regular expression: ^[0-9]+[.][0-9]+[.][0-9]+$
, which allows different versions to be ordered and compared. Example: 1.0.0, 1.0.1, etc.
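Because the version is constrained to the strict ^[0-9]+[.][0-9]+[.][0-9]+$ pattern, ordering and comparison can be sketched by parsing it into an integer tuple; `version_key` is a hypothetical helper, not part of the API:

```python
import re

VERSION_RE = re.compile(r"^[0-9]+[.][0-9]+[.][0-9]+$")  # pattern from the doc

def version_key(v):
    """Parse a schema_version like '1.0.1' into a sortable integer tuple."""
    if not VERSION_RE.match(v):
        raise ValueError(f"not a valid schema_version: {v!r}")
    return tuple(int(part) for part in v.split("."))

# Numeric (not lexicographic) comparison: 1.10.0 sorts after 1.0.1.
ordered = sorted(["1.0.1", "1.10.0", "1.0.0"], key=version_key)
```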
schema
string
Required. The raw YAML string representation of the MetadataSchema. The combination of [MetadataSchema.version] and the schema name given by title
in [MetadataSchema.schema] must be unique within a MetadataStore.
The schema is defined as an OpenAPI 3.0.2 MetadataSchema Object
The type of the MetadataSchema. This is a property that identifies which metadata types will use the MetadataSchema.
Output only. Timestamp when this MetadataSchema was created.
description
string
Description of the Metadata Schema
MetadataSchemaType
Describes the type of the MetadataSchema.
Enums | |
---|---|
METADATA_SCHEMA_TYPE_UNSPECIFIED |
Unspecified type for the MetadataSchema. |
ARTIFACT_TYPE |
A type indicating that the MetadataSchema will be used by Artifacts. |
EXECUTION_TYPE |
A type indicating that the MetadataSchema will be used by Executions. |
CONTEXT_TYPE |
A type indicating that the MetadataSchema will be used by Contexts. |
MetadataStore
Instance of a metadata store. Contains a set of metadata that can be queried.
name
string
Output only. The resource name of the MetadataStore instance.
Output only. Timestamp when this MetadataStore was created.
Output only. Timestamp when this MetadataStore was last updated.
Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.
description
string
Description of the MetadataStore.
Output only. State information of the MetadataStore.
Optional. Dataplex integration settings.
DataplexConfig
Represents Dataplex integration settings.
enabled_pipelines_lineage
bool
Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.
MetadataStoreState
Represents state information for a MetadataStore.
disk_utilization_bytes
int64
The disk utilization of the MetadataStore in bytes.
MigratableResource
Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.
Output only. Timestamp when the last migration attempt on this MigratableResource started. Will not be set if there's no migration attempt on this MigratableResource.
Output only. Timestamp when this MigratableResource was last updated.
Union field resource
.
resource
can be only one of the following:
Output only. Represents one Version in ml.googleapis.com.
Output only. Represents one Model in automl.googleapis.com.
Output only. Represents one Dataset in automl.googleapis.com.
Output only. Represents one Dataset in datalabeling.googleapis.com.
AutomlDataset
Represents one Dataset in automl.googleapis.com.
dataset
string
Full resource name of automl Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}
.
dataset_display_name
string
The Dataset's display name in automl.googleapis.com.
AutomlModel
Represents one Model in automl.googleapis.com.
model
string
Full resource name of automl Model. Format: projects/{project}/locations/{location}/models/{model}
.
model_display_name
string
The Model's display name in automl.googleapis.com.
DataLabelingDataset
Represents one Dataset in datalabeling.googleapis.com.
dataset
string
Full resource name of data labeling Dataset. Format: projects/{project}/datasets/{dataset}
.
dataset_display_name
string
The Dataset's display name in datalabeling.googleapis.com.
The migratable AnnotatedDataset in datalabeling.googleapis.com belongs to the data labeling Dataset.
DataLabelingAnnotatedDataset
Represents one AnnotatedDataset in datalabeling.googleapis.com.
annotated_dataset
string
Full resource name of data labeling AnnotatedDataset. Format: projects/{project}/datasets/{dataset}/annotatedDatasets/{annotated_dataset}
.
annotated_dataset_display_name
string
The AnnotatedDataset's display name in datalabeling.googleapis.com.
MlEngineModelVersion
Represents one model Version in ml.googleapis.com.
endpoint
string
The ml.googleapis.com endpoint that this model Version currently lives in. Example values:
- ml.googleapis.com
- us-central1-ml.googleapis.com
- europe-west4-ml.googleapis.com
- asia-east1-ml.googleapis.com
version
string
Full resource name of ml engine model Version. Format: projects/{project}/models/{model}/versions/{version}
.
MigrateResourceRequest
Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.
Union field request
.
request
can be only one of the following:
Config for migrating Version in ml.googleapis.com to Vertex AI's Model.
Config for migrating Model in automl.googleapis.com to Vertex AI's Model.
Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.
Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.
MigrateAutomlDatasetConfig
Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.
dataset
string
Required. Full resource name of automl Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}
.
dataset_display_name
string
Required. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified.
MigrateAutomlModelConfig
Config for migrating Model in automl.googleapis.com to Vertex AI's Model.
model
string
Required. Full resource name of automl Model. Format: projects/{project}/locations/{location}/models/{model}
.
model_display_name
string
Optional. Display name of the model in Vertex AI. System will pick a display name if unspecified.
MigrateDataLabelingDatasetConfig
Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.
dataset
string
Required. Full resource name of data labeling Dataset. Format: projects/{project}/datasets/{dataset}
.
dataset_display_name
string
Optional. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified.
Optional. Configs for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery. The specified AnnotatedDatasets have to belong to the datalabeling Dataset.
MigrateDataLabelingAnnotatedDatasetConfig
Config for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery.
annotated_dataset
string
Required. Full resource name of data labeling AnnotatedDataset. Format: projects/{project}/datasets/{dataset}/annotatedDatasets/{annotated_dataset}
.
MigrateMlEngineModelVersionConfig
Config for migrating version in ml.googleapis.com to Vertex AI's Model.
endpoint
string
Required. The ml.googleapis.com endpoint that this model version should be migrated from. Example values:
ml.googleapis.com
us-central1-ml.googleapis.com
europe-west4-ml.googleapis.com
asia-east1-ml.googleapis.com
model_version
string
Required. Full resource name of ml engine model version. Format: projects/{project}/models/{model}/versions/{version}
.
model_display_name
string
Required. Display name of the model in Vertex AI. System will pick a display name if unspecified.
MigrateResourceResponse
Describes a successfully migrated resource.
Before migration, the identifier in ml.googleapis.com, automl.googleapis.com or datalabeling.googleapis.com.
migrated_resource
. After migration, the resource name in Vertex AI. migrated_resource
can be only one of the following:dataset
string
Migrated Dataset's resource name.
model
string
Migrated Model's resource name.
Model
A trained machine learning Model.
name
string
The resource name of the Model.
version_id
string
Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
version_aliases[]
string
User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias} instead of the auto-generated version ID, i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish it from version_id. A default version alias is created for the first version of the model, and there must be exactly one default version alias for a model.
Output only. Timestamp when this version was created.
Output only. Timestamp when this version was most recently updated.
display_name
string
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the Model.
version_description
string
The description of this version.
The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict
and PredictionService.Explain
.
metadata_schema_uri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. Vertex AI always populates this field for AutoML Models; if no additional metadata is needed, it is set to an empty string. Note: the URI given on output will be immutable and probably different from the one given on input, including the URI scheme. The output URI will point to a location where the user has read access only.
Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema
. Unset if the Model does not have any additional information.
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
training_pipeline
string
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
pipeline_job
string
Optional. This field is populated if the model is produced by a pipeline job.
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel
, and all binaries it contains are copied and stored internally by Vertex AI. Not required for AutoML Models.
artifact_uri
string
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.
Output only. When this Model is deployed, its prediction resources are described by the prediction_resources
field of the Endpoint.deployed_models
object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint
and does not support online predictions (PredictionService.Predict
or PredictionService.Explain
). Such a Model can serve predictions by using a BatchPredictionJob
, if it has at least one entry each in supported_input_storage_formats
and supported_output_storage_formats
.
supported_input_storage_formats[]
string
Output only. The formats this Model supports in BatchPredictionJob.input_config
. If PredictSchemata.instance_schema_uri
exists, the instances should be given as per that schema.
The possible formats are:
jsonl: The JSON Lines format, where each instance is a single line. Uses GcsSource.
csv: The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
tf-record: The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
tf-record-gzip: Similar to tf-record, but the file is gzipped. Uses GcsSource.
bigquery: Each instance is a single row in BigQuery. Uses BigQuerySource.
file-list: Each line of the file is the location of an instance to process. Uses the gcs_source field of the InputConfig object.
If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob
. However, if it has supported_deployment_resources_types
, it could serve online predictions by using PredictionService.Predict
or PredictionService.Explain
.
supported_output_storage_formats[]
string
Output only. The formats this Model supports in BatchPredictionJob.output_config
. If both PredictSchemata.instance_schema_uri
and PredictSchemata.prediction_schema_uri
exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema).
The possible formats are:
jsonl: The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
csv: The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
bigquery: Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.
If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob
. However, if it has supported_deployment_resources_types
, it could serve online predictions by using PredictionService.Predict
or PredictionService.Explain
.
Output only. Timestamp when this Model was uploaded into Vertex AI.
Output only. Timestamp when this Model was most recently updated.
Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.
The default explanation specification for this Model.
The Model can be used for requesting explanation
after being deployed
if it is populated. The Model can be used for batch explanation
if it is populated.
All fields of the explanation_spec can be overridden by explanation_spec
of DeployModelRequest.deployed_model
, or explanation_spec
of BatchPredictionJob
.
If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation
by setting explanation_spec
of DeployModelRequest.deployed_model
and for batch explanation
by setting explanation_spec
of BatchPredictionJob
.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize your Models.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Stats of data used for training or evaluating the Model.
Only populated when the Model is trained by a TrainingPipeline with TrainingPipeline.data_input_config.
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
Output only. Source of a model. It can either be automl training pipeline, custom training pipeline, BigQuery ML, or saved and tuned from Genie or Model Garden.
Output only. If this Model is a copy of another Model, this contains info about the original.
metadata_artifact
string
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}
.
Optional. User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
BaseModelSource
User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models.
Union field source
.
source
can be only one of the following:
Source information of Model Garden models.
Information about the base model of Genie models.
DataStats
Stats of the data used to train or evaluate the Model.
training_data_items_count
int64
Number of DataItems that were used for training this Model.
validation_data_items_count
int64
Number of DataItems that were used for validating this Model during training.
test_data_items_count
int64
Number of DataItems that were used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test DataItems used by the first evaluation. If the Model is not evaluated, the number is 0.
training_annotations_count
int64
Number of Annotations that are used for training this Model.
validation_annotations_count
int64
Number of Annotations that are used for validating this Model during training.
test_annotations_count
int64
Number of Annotations that are used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test Annotations used by the first evaluation. If the Model is not evaluated, the number is 0.
DeploymentResourcesType
Identifies a type of Model's prediction resources.
Enums | |
---|---|
DEPLOYMENT_RESOURCES_TYPE_UNSPECIFIED |
Should not be used. |
DEDICATED_RESOURCES |
Resources that are dedicated to the DeployedModel , and that need a higher degree of manual configuration. |
AUTOMATIC_RESOURCES |
Resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration. |
SHARED_RESOURCES |
Resources that can be shared by multiple DeployedModels . A pre-configured DeploymentResourcePool is required. |
ExportFormat
Represents export format supported by the Model. All formats export to Google Cloud Storage.
id
string
Output only. The ID of the export format. The possible format IDs are:
tflite: Used for Android mobile devices.
edgetpu-tflite: Used for Edge TPU devices.
tf-saved-model: A TensorFlow model in SavedModel format.
tf-js: A TensorFlow.js model that can be used in the browser and in Node.js using JavaScript.
core-ml: Used for iOS mobile devices.
custom-trained: A Model that was uploaded or trained by custom code.
Output only. The content of this Model that may be exported.
ExportableContent
The Model content that can be exported.
Enums | |
---|---|
EXPORTABLE_CONTENT_UNSPECIFIED |
Should not be used. |
ARTIFACT |
Model artifact and any of its supported files. Will be exported to the location specified by the artifactDestination field of the ExportModelRequest.output_config object. |
IMAGE |
The container image that is to be used when deploying this Model. Will be exported to the location specified by the imageDestination field of the ExportModelRequest.output_config object. |
OriginalModelInfo
Contains information about the original Model if this Model is a copy.
model
string
Output only. The resource name of the Model this Model is a copy of, including the revision. Format: projects/{project}/locations/{location}/models/{model_id}@{version_id}
ModelContainerSpec
Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.
image_uri
string
Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent.
The container image is ingested upon ModelService.UploadModel
, stored internally, and this original path is afterwards not used.
To learn about the requirements for the Docker image itself, see Custom container requirements.
You can use the URI to one of Vertex AI's pre-built container images for prediction in this field.
command[]
string
Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT
's "exec" form, not its "shell" form.
If you do not specify this field, then the container's ENTRYPOINT
runs, in conjunction with the args
field or the container's CMD
, if either exists. If this field is not specified and the container does not have an ENTRYPOINT
, then refer to the Docker documentation about how CMD
and ENTRYPOINT
interact.
If you specify this field, then you can also specify the args
field to provide additional arguments for this command. However, if you specify this field, then the container's CMD
is ignored. See the Kubernetes documentation about how the command
and args
fields interact with a container's ENTRYPOINT
and CMD
.
In this field, you can reference environment variables set by Vertex AI and environment variables set in the env
field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax:
$(VARIABLE_NAME)
Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$
; for example:
$$(VARIABLE_NAME)
This field corresponds to the command
field of the Kubernetes Containers v1 core API.
args[]
string
Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD
. Specify this field as an array of executable and arguments, similar to a Docker CMD
's "default parameters" form.
If you don't specify this field but do specify the command
field, then the command from the command
field runs without any additional arguments. See the Kubernetes documentation about how the command
and args
fields interact with a container's ENTRYPOINT
and CMD
.
If you don't specify this field and don't specify the command
field, then the container's ENTRYPOINT
and CMD
determine what runs based on their default behavior. See the Docker documentation about how CMD
and ENTRYPOINT
interact.
In this field, you can reference environment variables set by Vertex AI and environment variables set in the env
field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax:
$(VARIABLE_NAME)
Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$
; for example:
$$(VARIABLE_NAME)
This field corresponds to the args
field of the Kubernetes Containers v1 core API.
Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables.
Additionally, the command
and args
fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable VAR_2
to have the value foo bar
:
[
{
"name": "VAR_1",
"value": "foo"
},
{
"name": "VAR_2",
"value": "$(VAR_1) bar"
}
]
If you switch the order of the variables in the example, then the expansion does not occur.
This field corresponds to the env
field of the Kubernetes Containers v1 core API.
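The expansion and ordering rules above can be sketched locally. `expand` and `resolve_env` are hypothetical helpers modeling the documented $(VARIABLE_NAME) syntax, the $$ escape, the leave-unresolved-references-unchanged rule, and the later-entries-may-reference-earlier-entries ordering:

```python
import re

# Matches either an escaped $$(NAME) or a plain $(NAME) reference.
_REF = re.compile(r"\$\$\(([A-Za-z_][A-Za-z0-9_]*)\)|\$\(([A-Za-z_][A-Za-z0-9_]*)\)")

def expand(value, env):
    def sub(m):
        if m.group(1) is not None:       # $$(NAME) escapes to literal $(NAME)
            return f"$({m.group(1)})"
        name = m.group(2)
        return env.get(name, m.group(0))  # unresolved references stay unchanged
    return _REF.sub(sub, value)

def resolve_env(entries):
    """Expand entries in order: later entries may reference earlier ones."""
    env = {}
    for e in entries:
        env[e["name"]] = expand(e["value"], env)
    return env

env = resolve_env([
    {"name": "VAR_1", "value": "foo"},
    {"name": "VAR_2", "value": "$(VAR_1) bar"},
])
```

Swapping the order of VAR_1 and VAR_2 leaves VAR_2 as the literal `$(VAR_1) bar`, matching the note above that expansion then does not occur.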
Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.
If you do not specify this field, it defaults to the following value:
[
{
"containerPort": 8080
}
]
Vertex AI does not use ports other than the first one listed. This field corresponds to the ports
field of the Kubernetes Containers v1 core API.
predict_route
string
Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict
to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response.
For example, if you set this field to /foo
, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the /foo
path on the port of your container specified by the first value of this ModelContainerSpec
's ports
field.
If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint
:
/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict
The placeholders in this value are replaced as follows:
ENDPOINT: The last segment (following endpoints/) of the Endpoint.name field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)
DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
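The default-route construction described above can be sketched as a string template. `default_predict_route` is a hypothetical helper (the real value is assembled by Vertex AI at deploy time):

```python
def default_predict_route(endpoint_name, deployed_model_id):
    """Build the documented default predict_route. ENDPOINT is the last
    segment (after 'endpoints/') of the Endpoint resource name."""
    endpoint_id = endpoint_name.rsplit("endpoints/", 1)[-1]
    return f"/v1/endpoints/{endpoint_id}/deployedModels/{deployed_model_id}:predict"

route = default_predict_route(
    "projects/p/locations/us-central1/endpoints/123", "456")
```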
health_route
string
Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks.
For example, if you set this field to /bar
, then Vertex AI intermittently sends a GET request to the /bar
path on the port of your container specified by the first value of this ModelContainerSpec
's ports
field.
If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint
:
/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict
The placeholders in this value are replaced as follows:
ENDPOINT: The last segment (following endpoints/) of the Endpoint.name field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)
DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.
If you do not specify this field, gRPC requests to the container will be disabled.
Vertex AI does not use ports other than the first one listed. This field corresponds to the ports
field of the Kubernetes Containers v1 core API.
Immutable. Deployment timeout. Limit for deployment timeout is 2 hours.
Immutable. Specification for Kubernetes startup probe.
Immutable. Specification for Kubernetes readiness probe.
ModelDeploymentMonitoringBigQueryTable
ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.
The source of log.
The type of log.
bigquery_table_path
string
The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://<project_id>.model_deployment_monitoring_<endpoint_id>.<tolower(log_source)>_<tolower(log_type)>
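The documented table-path format can be sketched with a small helper (hypothetical function name; the format string is taken verbatim from the description above):

```python
def monitoring_table_path(project_id: str, endpoint_id: str,
                          log_source: str, log_type: str) -> str:
    # Matches the documented format:
    # bq://<project_id>.model_deployment_monitoring_<endpoint_id>.<tolower(log_source)>_<tolower(log_type)>
    return (f"bq://{project_id}.model_deployment_monitoring_{endpoint_id}."
            f"{log_source.lower()}_{log_type.lower()}")

print(monitoring_table_path("my-project", "123", "SERVING", "PREDICT"))
# bq://my-project.model_deployment_monitoring_123.serving_predict
```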
request_response_logging_schema_version
string
Output only. The schema version of the request/response logging BigQuery table. Default to v1 if unset.
LogSource
Indicates where the log comes from.
Enums | |
---|---|
LOG_SOURCE_UNSPECIFIED |
Unspecified source. |
TRAINING |
Logs coming from Training dataset. |
SERVING |
Logs coming from Serving traffic. |
LogType
Indicates what type of traffic the log belongs to.
Enums | |
---|---|
LOG_TYPE_UNSPECIFIED |
Unspecified type. |
PREDICT |
Predict logs. |
EXPLAIN |
Explain logs. |
ModelDeploymentMonitoringJob
Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.
name
string
Output only. Resource name of a ModelDeploymentMonitoringJob.
display_name
string
Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
endpoint
string
Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state becomes 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
Output only. Schedule state when the monitoring job is in Running state.
Output only. Latest triggered monitoring pipeline metadata.
Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
Required. Schedule config for running the monitoring job.
Required. Sample Strategy for logging.
Alert config for model monitoring.
predict_instance_schema_uri
string
YAML schema file URI describing the format of a single instance that is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
Sample Predict instance, same format as PredictRequest.instances
, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri
. If not set, we will generate predict schema from collected predict requests.
analysis_instance_schema_uri
string
YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze.
If this field is empty, all the feature data types are inferred from predict_instance_schema_uri
, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as strings.
Output only. The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most 4 log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL: the effective TTL is ceil(TTL / 86400) days. For example, { seconds: 3600 } indicates a TTL of 1 day.
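The day-rounding rule above can be illustrated with a short sketch (hypothetical helper name):

```python
import math

def ttl_in_days(ttl_seconds: int) -> int:
    # A day is the basic unit of the TTL; any fraction rounds up:
    # effective TTL = ceil(ttl_seconds / 86400) days.
    return math.ceil(ttl_seconds / 86400)

print(ttl_in_days(3600))   # 1  -> a TTL of {seconds: 3600} means 1 day
print(ttl_in_days(90000))  # 2  -> just over one day rounds up to 2 days
```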
labels
map<string, string>
The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Output only. Timestamp when this ModelDeploymentMonitoringJob was created.
Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
Stats anomalies base folder path.
Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
enable_monitoring_pipeline_logs
bool
If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note that the logs incur cost, which is subject to Cloud Logging pricing.
Output only. Only populated when the job's state is JOB_STATE_FAILED
or JOB_STATE_CANCELLED
.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
LatestMonitoringPipelineMetadata
MonitoringScheduleState
The state of the monitoring schedule.
Enums | |
---|---|
MONITORING_SCHEDULE_STATE_UNSPECIFIED |
Unspecified state. |
PENDING |
The pipeline has been picked up and is waiting to run. |
OFFLINE |
The pipeline is offline and will be scheduled for next run. |
RUNNING |
The pipeline is running. |
ModelDeploymentMonitoringObjectiveConfig
ModelDeploymentMonitoringObjectiveConfig contains the pair of deployed_model_id to ModelMonitoringObjectiveConfig.
deployed_model_id
string
The DeployedModel ID of the objective config.
The objective config for the model monitoring job of this deployed model.
ModelDeploymentMonitoringObjectiveType
The Model Monitoring Objective types.
Enums | |
---|---|
MODEL_DEPLOYMENT_MONITORING_OBJECTIVE_TYPE_UNSPECIFIED |
Default value, should not be set. |
RAW_FEATURE_SKEW |
Raw feature values' stats to detect skew between Training-Prediction datasets. |
RAW_FEATURE_DRIFT |
Raw feature values' stats to detect drift between Serving-Prediction datasets. |
FEATURE_ATTRIBUTION_SKEW |
Feature attribution scores to detect skew between Training-Prediction datasets. |
FEATURE_ATTRIBUTION_DRIFT |
Feature attribution scores to detect skew between Prediction datasets collected within different time windows. |
ModelDeploymentMonitoringScheduleConfig
The config for scheduling monitoring job.
Required. The model monitoring job scheduling interval. It will be rounded up to next full hour. This defines how often the monitoring jobs are triggered.
The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval
will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
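The cutoff example above can be sketched as follows (hypothetical helper; the half-open window convention is an assumption consistent with the example):

```python
from datetime import datetime, timedelta

def prediction_window(cutoff: datetime, monitor_window_seconds: int):
    # Data in [cutoff - monitor_window, cutoff) is retrieved and
    # aggregated to calculate the monitoring statistics for one run.
    return cutoff - timedelta(seconds=monitor_window_seconds), cutoff

# The example above: cutoff 2022-01-08 14:30:00, monitor_window of 3600 seconds.
start, end = prediction_window(datetime(2022, 1, 8, 14, 30), 3600)
print(start, end)  # 2022-01-08 13:30:00 2022-01-08 14:30:00
```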
ModelEvaluation
A collection of metrics calculated by comparing Model's predictions on all of the test data against annotations from the test data.
name
string
Output only. The resource name of the ModelEvaluation.
display_name
string
The display name of the ModelEvaluation.
metrics_schema_uri
string
Points to a YAML file stored on Google Cloud Storage describing the metrics
of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object.
Evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri
Output only. Timestamp when this ModelEvaluation was created.
slice_dimensions[]
string
All possible dimensions
of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices
request, in the form of slice.dimension = <dimension>
.
data_item_schema_uri
string
Points to a YAML file stored on Google Cloud Storage describing [EvaluatedDataItemView.data_item_payload][] and EvaluatedAnnotation.data_item_payload
. The schema is defined as an OpenAPI 3.0.2 Schema Object.
This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation.
annotation_schema_uri
string
Points to a YAML file stored on Google Cloud Storage describing [EvaluatedDataItemView.predictions][], [EvaluatedDataItemView.ground_truths][], EvaluatedAnnotation.predictions
, and EvaluatedAnnotation.ground_truths
. The schema is defined as an OpenAPI 3.0.2 Schema Object.
This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation.
Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models.
Describes the values of ExplanationSpec
that are used for explaining the predicted values on the evaluated data.
The metadata of the ModelEvaluation. For the ModelEvaluation uploaded from Managed Pipeline, metadata contains a structured value with keys of "pipeline_job_id", "evaluation_dataset_type", "evaluation_dataset_path", "row_based_metrics_path".
ModelEvaluationExplanationSpec
explanation_type
string
Explanation type.
For AutoML Image Classification models, possible values are:
image-integrated-gradients
image-xrai
Explanation spec details.
ModelEvaluationSlice
A collection of metrics calculated by comparing Model's predictions on a slice of the test data against ground truth annotations.
name
string
Output only. The resource name of the ModelEvaluationSlice.
Output only. The slice of the test data that is used to evaluate the Model.
metrics_schema_uri
string
Output only. Points to a YAML file stored on Google Cloud Storage describing the metrics
of this ModelEvaluationSlice. The schema is defined as an OpenAPI 3.0.2 Schema Object.
Output only. Sliced evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri
Output only. Timestamp when this ModelEvaluationSlice was created.
Output only. Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for tabular Models.
Slice
Definition of a slice.
dimension
string
Output only. The dimension of the slice. Well-known dimensions are: * annotationSpec
: This slice is on the test data that has either ground truth or prediction with AnnotationSpec.display_name
equals to value
. * slice
: This slice is a user customized slice defined by its SliceSpec.
value
string
Output only. The value of the dimension in this slice.
Output only. Specification for how the data was sliced.
SliceSpec
Specification for how the data should be sliced.
Mapping configuration for this SliceSpec. The key is the name of the feature. By default, the key will be prefixed by "instance" as a dictionary prefix for Vertex Batch Predictions output format.
Range
A range of values for slice(s). low
is inclusive, high
is exclusive.
low
float
Inclusive low value for the range.
high
float
Exclusive high value for the range.
SliceConfig
Specification message containing the config for this SliceSpec. When kind
is selected as value
and/or range
, only a single slice will be computed. When all_values
is present, a separate slice will be computed for each possible label/value for the corresponding key in config
. Examples, with feature zip_code with values 12345, 23334, 88888 and feature country with values "US", "Canada", "Mexico" in the dataset:
Example 1:
{
"zip_code": { "value": { "float_value": 12345.0 } }
}
A single slice for any data with zip_code 12345 in the dataset.
Example 2:
{
"zip_code": { "range": { "low": 12345, "high": 20000 } }
}
A single slice containing data where the zip_code is between 12345 and 20000. For this example, data with a zip_code of 12345 will be in this slice.
Example 3:
{
"zip_code": { "range": { "low": 10000, "high": 20000 } },
"country": { "value": { "string_value": "US" } }
}
A single slice containing data where the zip_code is between 10000 and 20000 and the country is "US". For this example, data with a zip_code of 12345 and country "US" will be in this slice.
Example 4:
{ "country": {"all_values": { "value": true } } }
Three slices are computed, one for each unique country in the dataset.
Example 5:
{
"country": { "all_values": { "value": true } },
"zip_code": { "value": { "float_value": 12345.0 } }
}
Three slices are computed, one for each unique country in the dataset where the zip_code is also 12345. For this example, data with zip_code 12345 and country "US" will be in one slice, zip_code 12345 and country "Canada" in another slice, and zip_code 12345 and country "Mexico" in another slice, totaling 3 slices.
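The value/range matching semantics of the examples above can be sketched as a small checker (simplified dict form of SliceConfig; the helper is hypothetical and omits all_values, which fans out into one slice per unique value):

```python
def in_slice(slice_config: dict, point: dict) -> bool:
    """Check whether a data point falls into the slice described by
    slice_config, where each feature maps to {"value": ...} or
    {"range": {"low": ..., "high": ...}} (low inclusive, high exclusive)."""
    for feature, cfg in slice_config.items():
        v = point[feature]
        if "value" in cfg and v != cfg["value"]:
            return False
        if "range" in cfg and not (cfg["range"]["low"] <= v < cfg["range"]["high"]):
            return False
    return True

# Example 3 above: zip_code in [10000, 20000) and country == "US".
cfg = {"zip_code": {"range": {"low": 10000, "high": 20000}},
       "country": {"value": "US"}}
print(in_slice(cfg, {"zip_code": 12345, "country": "US"}))  # True
print(in_slice(cfg, {"zip_code": 23334, "country": "US"}))  # False
```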
Union field kind
.
kind
can be only one of the following:
A unique specific value for a given feature. Example: { "value": { "string_value": "12345" } }
A range of values for a numerical feature. Example: {"range":{"low":10000.0,"high":50000.0}}
will capture 12345 and 23334 in the slice.
If all_values is set to true, then all possible labels of the keyed feature will have another slice computed. Example: {"all_values":{"value":true}}
Value
Single value that supports strings and floats.
Union field kind
.
kind
can be only one of the following:
string_value
string
String type.
float_value
float
Float type.
ModelExplanation
Aggregated explanation metrics for a Model over a set of instances.
Output only. Aggregated attributions explaining the Model's prediction outputs over the set of instances. The attributions are grouped by outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index
can be used to identify which output this attribution is explaining.
The baselineOutputValue
, instanceOutputValue
and featureAttributions
fields are averaged over the test data.
NOTE: Currently AutoML tabular classification Models produce only one attribution, which averages attributions over all the classes it predicts. Attribution.approximation_error
is not populated.
ModelGardenSource
Contains information about the source of the models generated from Model Garden.
public_model_name
string
Required. The model garden source model resource name.
ModelMonitoringAlertConfig
The alert config for model monitoring.
enable_logging
bool
Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto [google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry][]. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
notification_channels[]
string
Resource names of the NotificationChannels to send the alert to. Must be of the format projects/<project_id_or_number>/notificationChannels/<channel_id>
Union field alert
.
alert
can be only one of the following:
Email alert config.
EmailAlertConfig
The config for email alert.
user_emails[]
string
The email addresses to send the alert.
ModelMonitoringObjectiveConfig
The objective configuration for model monitoring, including the information needed to detect anomalies for one particular model.
Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
The config for skew between training data and prediction data.
The config for drift of prediction data.
The config for integrating with Vertex Explainable AI.
ExplanationConfig
The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.
enable_feature_attributes
bool
Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
Predictions generated by the BatchPredictionJob using baseline dataset.
ExplanationBaseline
Output from BatchPredictionJob
for Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.
The storage format of the predictions generated by the BatchPrediction job.
destination
. The configuration specifying of BatchExplain job output. This can be used to generate the baseline of feature attribution scores. destination
can be only one of the following:Cloud Storage location for BatchExplain output.
BigQuery location for BatchExplain output.
PredictionFormat
The storage format of the predictions generated by the BatchPrediction job.
Enums | |
---|---|
PREDICTION_FORMAT_UNSPECIFIED |
Should not be set. |
JSONL |
Predictions are in JSONL files. |
BIGQUERY |
Predictions are in BigQuery. |
PredictionDriftDetectionConfig
The config for Prediction data drift detection.
Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
TrainingDataset
Training Dataset information.
data_format
string
Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are:
"tf-record": The source file is a TFRecord file.
"csv": The source file is a CSV file.
"jsonl": The source file is a JSONL file.
target_field
string
The target field name that the model is to predict. This field will be excluded when doing Predict and/or Explain for the training data.
Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
Union field data_source
.
data_source
can be only one of the following:
dataset
string
The resource name of the Dataset used to train this Model.
The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
The BigQuery table of the unmanaged Dataset used to train this Model.
TrainingPredictionSkewDetectionConfig
The config for Training & Prediction data skew detection. It specifies the training dataset sources and the skew detection parameters.
Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
ModelMonitoringStatsAnomalies
Statistics and anomalies generated by Model Monitoring.
The Model Monitoring Objective these stats and anomalies belong to.
deployed_model_id
string
Deployed Model ID.
anomaly_count
int32
Number of anomalies within all stats.
A list of historical Stats and Anomalies generated for all Features.
FeatureHistoricStatsAnomalies
Historical Stats (and Anomalies) for a specific Feature.
feature_display_name
string
Display Name of the Feature.
Threshold for anomaly detection.
Stats calculated for the Training Dataset.
A list of historical stats generated by different time window's Prediction Dataset.
ModelSourceInfo
Detailed description of the source information of the model.
Type of the model source.
ModelSourceType
Source of the model. Different from objective
field, this ModelSourceType
enum indicates the source from which the model was accessed or obtained, whereas the objective
indicates the overall aim or function of this model.
Enums | |
---|---|
MODEL_SOURCE_TYPE_UNSPECIFIED |
Should not be used. |
AUTOML |
The Model is uploaded by automl training pipeline. |
CUSTOM |
The Model is uploaded by user or custom training pipeline. |
BQML |
The Model is registered and sync'ed from BigQuery ML. |
MODEL_GARDEN |
The Model is saved or tuned from Model Garden. |
CUSTOM_TEXT_EMBEDDING |
The Model is uploaded by text embedding finetuning pipeline. |
MARKETPLACE |
The Model is saved or tuned from Marketplace. |
MutateDeployedIndexOperationMetadata
Runtime operation information for IndexEndpointService.MutateDeployedIndex
.
The operation generic information.
deployed_index_id
string
The unique index id specified by the user.
MutateDeployedIndexRequest
Request message for IndexEndpointService.MutateDeployedIndex
.
index_endpoint
string
Required. The name of the IndexEndpoint resource into which to deploy an Index. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}
Required. The DeployedIndex to be updated within the IndexEndpoint. Currently, the updatable fields are DeployedIndex.automatic_resources and DeployedIndex.dedicated_resources.
MutateDeployedIndexResponse
Response message for IndexEndpointService.MutateDeployedIndex
.
The DeployedIndex that had been updated in the IndexEndpoint.
MutateDeployedModelOperationMetadata
Runtime operation information for EndpointService.MutateDeployedModel
.
The operation generic information.
MutateDeployedModelRequest
Request message for EndpointService.MutateDeployedModel
.
endpoint
string
Required. The name of the Endpoint resource into which to mutate a DeployedModel. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Required. The DeployedModel to be mutated within the Endpoint. Only the following fields can be mutated:
min_replica_count in either DedicatedResources or AutomaticResources
max_replica_count in either DedicatedResources or AutomaticResources
autoscaling_metric_specs
disable_container_logging (v1 only)
enable_container_logging (v1beta1 only)
Required. The update mask applies to the resource. See google.protobuf.FieldMask
.
MutateDeployedModelResponse
Response message for EndpointService.MutateDeployedModel
.
The DeployedModel that's being mutated.
NasJob
Represents a Neural Architecture Search (NAS) job.
name
string
Output only. Resource name of the NasJob.
display_name
string
Required. The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Required. The specification of a NasJob.
Output only. Output of the NasJob.
Output only. The detailed state of the job.
Output only. Time when the NasJob was created.
Output only. Time when the NasJob for the first time entered the JOB_STATE_RUNNING
state.
Output only. Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED
, JOB_STATE_FAILED
, JOB_STATE_CANCELLED
.
Output only. Time when the NasJob was most recently updated.
Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
labels
map<string, string>
The labels with user-defined metadata to organize NasJobs.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
enable_restricted_image_training
(deprecated)
bool
Optional. Enable a separation of Custom model training and restricted image training for tenant project.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
NasJobOutput
Represents a uCAIP NasJob output.
output
. The output of this Neural Architecture Search (NAS) job. output
can be only one of the following:Output only. The output of this multi-trial Neural Architecture Search (NAS) job.
MultiTrialJobOutput
NasJobSpec
Represents the spec of a NasJob.
resume_nas_job_id
string
The ID of the existing NasJob in the same Project and Location that will be used to resume search. search_space_spec and nas_algorithm_spec are obtained from the previous NasJob, so they should not be provided again for this NasJob.
search_space_spec
string
It defines the search space for Neural Architecture Search (NAS).
nas_algorithm_spec
. The Neural Architecture Search (NAS) algorithm specification. nas_algorithm_spec
can be only one of the following:The spec of multi-trial algorithms.
MultiTrialAlgorithmSpec
The spec of multi-trial Neural Architecture Search (NAS).
The multi-trial Neural Architecture Search (NAS) algorithm type. Defaults to REINFORCEMENT_LEARNING
.
Metric specs for the NAS job. Validation for this field is done at multi_trial_algorithm_spec
field.
Required. Spec for search trials.
Spec for train trials. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.
MetricSpec
Represents a metric to optimize.
metric_id
string
Required. The ID of the metric. Must not contain whitespaces.
Required. The optimization goal of the metric.
GoalType
The available types of optimization goals.
Enums | |
---|---|
GOAL_TYPE_UNSPECIFIED |
Goal Type will default to maximize. |
MAXIMIZE |
Maximize the goal metric. |
MINIMIZE |
Minimize the goal metric. |
MultiTrialAlgorithm
The available types of multi-trial algorithms.
Enums | |
---|---|
MULTI_TRIAL_ALGORITHM_UNSPECIFIED |
Defaults to REINFORCEMENT_LEARNING . |
REINFORCEMENT_LEARNING |
The Reinforcement Learning Algorithm for Multi-trial Neural Architecture Search (NAS). |
GRID_SEARCH |
The Grid Search Algorithm for Multi-trial Neural Architecture Search (NAS). |
SearchTrialSpec
Represent spec for search trials.
Required. The spec of a search trial job. The same spec applies to all search trials.
max_trial_count
int32
Required. The maximum number of Neural Architecture Search (NAS) trials to run.
max_parallel_trial_count
int32
Required. The maximum number of trials to run in parallel.
max_failed_trial_count
int32
The number of failed trials that need to be seen before failing the NasJob.
If set to 0, Vertex AI decides how many trials must fail before the whole job fails.
TrainTrialSpec
Represent spec for train trials.
Required. The spec of a train trial job. The same spec applies to all train trials.
max_parallel_trial_count
int32
Required. The maximum number of trials to run in parallel.
frequency
int32
Required. Frequency of search trials to start train stage. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.
NasTrial
Represents a uCAIP NasJob trial.
id
string
Output only. The identifier of the NasTrial assigned by the service.
Output only. The detailed state of the NasTrial.
Output only. The final measurement containing the objective value.
Output only. Time when the NasTrial was started.
Output only. Time when the NasTrial's status changed to SUCCEEDED
or INFEASIBLE
.
State
Describes a NasTrial state.
Enums | |
---|---|
STATE_UNSPECIFIED |
The NasTrial state is unspecified. |
REQUESTED |
Indicates that a specific NasTrial has been requested, but it has not yet been suggested by the service. |
ACTIVE |
Indicates that the NasTrial has been suggested. |
STOPPING |
Indicates that the NasTrial should stop according to the service. |
SUCCEEDED |
Indicates that the NasTrial is completed successfully. |
INFEASIBLE |
Indicates that the NasTrial should not be attempted again. The service will set a NasTrial to INFEASIBLE when it's done but missing the final_measurement. |
NasTrialDetail
Represents NasTrial details along with its parameters. If there is a corresponding train NasTrial, the train NasTrial is also returned.
name
string
Output only. Resource name of the NasTrialDetail.
parameters
string
The parameters for the NasJob NasTrial.
The requested search NasTrial.
The train NasTrial corresponding to search_trial
. Only populated if search_trial
is used for training.
NearestNeighborQuery
A query to find a number of similar entities.
neighbor_count
int32
Optional. The number of similar entities to be retrieved from feature view for each query.
Optional. The list of string filters.
Optional. The list of numeric filters.
per_crowding_attribute_neighbor_count
int32
Optional. Crowding is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than per_crowding_attribute_neighbor_count of the k neighbors returned have the same value of crowding_attribute. It's used for improving result diversity.
Optional. Parameters that can be set to tune query on the fly.
Union field instance
.
instance
can be only one of the following:
entity_id
string
Optional. The entity id whose similar entities should be searched for. If embedding is set, search will use embedding instead of entity_id.
Optional. The embedding vector to be used for similarity search.
Embedding
The embedding vector.
value[]
float
Optional. Individual value in the embedding.
NumericFilter
Numeric filter is used to search a subset of the entities by using boolean rules on numeric columns. For example: Database Point 0: {name: "a" value_int: 42} {name: "b" value_float: 1.0} Database Point 1: {name: "a" value_int: 10} {name: "b" value_float: 2.0} Database Point 2: {name: "a" value_int: -1} {name: "b" value_float: 3.0} Query: {name: "a" value_int: 12 operator: LESS} // Matches Point 1, 2 {name: "b" value_float: 2.0 operator: EQUAL} // Matches Point 1
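The database-point example above can be sketched as a small checker (hypothetical helper; the Operator enum values are mapped to Python comparisons):

```python
import operator

# Mapping of the NumericFilter Operator enum to Python comparisons.
OPS = {
    "LESS": operator.lt,
    "LESS_EQUAL": operator.le,
    "EQUAL": operator.eq,
    "GREATER_EQUAL": operator.ge,
    "GREATER": operator.gt,
    "NOT_EQUAL": operator.ne,
}

def numeric_filter_matches(point_value, query_value, op: str) -> bool:
    # A datapoint is allowlisted when `point_value <op> query_value` holds.
    return OPS[op](point_value, query_value)

# The example above: query {name: "a", value_int: 12, operator: LESS}
points = {0: 42, 1: 10, 2: -1}  # value of feature "a" per database point
print([pid for pid, v in points.items() if numeric_filter_matches(v, 12, "LESS")])
# [1, 2]
```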
name
string
Required. Column name in BigQuery that is used as the filter.
Union field Value. The type of Value must be consistent for all datapoints with a given name. This is verified at runtime. Value can be only one of the following:
value_int
int64
int value type.
value_float
float
float value type.
value_double
double
double value type.
Optional. This MUST be specified for queries and must NOT be specified for database points.
Operator
Datapoints for which Operator is true relative to the query's Value field will be allowlisted.
Enums | |
---|---|
OPERATOR_UNSPECIFIED |
Unspecified operator. |
LESS |
Entities are eligible if their value is < the query's. |
LESS_EQUAL |
Entities are eligible if their value is <= the query's. |
EQUAL |
Entities are eligible if their value is == the query's. |
GREATER_EQUAL |
Entities are eligible if their value is >= the query's. |
GREATER |
Entities are eligible if their value is > the query's. |
NOT_EQUAL |
Entities are eligible if their value is != the query's. |
Parameters
Parameters that can be overridden in each query to tune query latency and recall.
approximate_neighbor_candidates
int32
Optional. The number of neighbors to find via approximate search before exact reordering is performed; if set, this value must be > neighbor_count.
leaf_nodes_search_fraction
double
Optional. The fraction of leaves to search, set at query time, allowing the user to tune search performance. Increasing this value increases both search accuracy and latency. The value should be between 0.0 and 1.0.
StringFilter
String filter is used to search a subset of the entities by using boolean rules on string columns. For example: if a query specifies a string filter with 'name = color, allow_tokens = {red, blue}, deny_tokens = {purple}', then that query will match entities that are red or blue, but if those points are also purple, they will be excluded even if they are red/blue. Only string filter is supported for now; numeric filter will be supported in the near future.
name
string
Required. Column name in BigQuery that is used as a filter.
allow_tokens[]
string
Optional. The allowed tokens.
deny_tokens[]
string
Optional. The denied tokens.
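The allow/deny semantics from the example above can be sketched as a predicate. This illustrates the documented behavior only; it is not the service implementation:

```python
def matches_string_filter(entity_tokens, allow_tokens, deny_tokens):
    """True if the entity carries at least one allowed token and no denied token."""
    tokens = set(entity_tokens)
    if tokens & set(deny_tokens):
        return False  # deny wins even when an allow token is also present
    return bool(tokens & set(allow_tokens))
```

With allow_tokens = {red, blue} and deny_tokens = {purple}, a red entity matches, but a red-and-purple entity is excluded.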
NearestNeighborSearchOperationMetadata
Runtime operation metadata with regard to Matching Engine Index.
The validation stats of the content (per file) to be inserted or updated on the Matching Engine Index resource. Populated if contentsDeltaUri is provided as part of Index.metadata
. Please note that, currently, stats are not available for files that are broken or have an unsupported file format.
data_bytes_count
int64
The ingested data size in bytes.
ContentValidationStats
source_gcs_uri
string
Cloud Storage URI pointing to the original file in user's bucket.
valid_record_count
int64
Number of records in this file that were successfully processed.
invalid_record_count
int64
Number of records in this file that were skipped due to validation errors.
Detailed information about the partial failures encountered for those invalid records that couldn't be parsed. Up to 50 partial errors will be reported.
valid_sparse_record_count
int64
Number of sparse records in this file that were successfully processed.
invalid_sparse_record_count
int64
Number of sparse records in this file that were skipped due to validation errors.
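A small helper for summarizing these counts, assuming the stats arrive as a JSON-style dict shaped like ContentValidationStats (hypothetical helper, for illustration):

```python
def record_error_rate(stats):
    """Fraction of dense records in a file that failed validation.

    `stats` is a dict with the ContentValidationStats count fields.
    """
    valid = stats.get("valid_record_count", 0)
    invalid = stats.get("invalid_record_count", 0)
    total = valid + invalid
    return invalid / total if total else 0.0
```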
RecordError
The error type of this record.
error_message
string
A human-readable message that is shown to the user to help them fix the error. Note that this message may change from time to time; your code should check against error_type as the source of truth.
source_gcs_uri
string
Cloud Storage URI pointing to the original file in user's bucket.
embedding_id
string
Empty if the embedding id failed to parse.
raw_record
string
The original content of this record.
RecordErrorType
Enums | |
---|---|
ERROR_TYPE_UNSPECIFIED |
Default, shall not be used. |
EMPTY_LINE |
The record is empty. |
INVALID_JSON_SYNTAX |
Invalid json format. |
INVALID_CSV_SYNTAX |
Invalid csv format. |
INVALID_AVRO_SYNTAX |
Invalid avro format. |
INVALID_EMBEDDING_ID |
The embedding id is not valid. |
EMBEDDING_SIZE_MISMATCH |
The size of the dense embedding vectors does not match with the specified dimension. |
NAMESPACE_MISSING |
The namespace field is missing. |
PARSING_ERROR |
Generic catch-all error. Only used for validation failure where the root cause cannot be easily retrieved programmatically. |
DUPLICATE_NAMESPACE |
There are multiple restricts with the same namespace value. |
OP_IN_DATAPOINT |
Numeric restrict has operator specified in datapoint. |
MULTIPLE_VALUES |
Numeric restrict has multiple values specified. |
INVALID_NUMERIC_VALUE |
Numeric restrict has invalid numeric value specified. |
INVALID_ENCODING |
File is not in UTF_8 format. |
INVALID_SPARSE_DIMENSIONS |
Error parsing sparse dimensions field. |
INVALID_TOKEN_VALUE |
Token restrict value is invalid. |
INVALID_SPARSE_EMBEDDING |
Invalid sparse embedding. |
INVALID_EMBEDDING |
Invalid dense embedding. |
NearestNeighbors
Nearest neighbors for one query.
All its neighbors.
Neighbor
A neighbor of the query vector.
entity_id
string
The id of the similar entity.
distance
double
The distance between the neighbor and the query vector.
The attributes of the neighbor, e.g. filters, crowding and metadata. Note that full entities are returned only when "return_full_entity" is set to true. Otherwise, only the "entity_id" and "distance" fields are populated.
Neighbor
Neighbors for example-based explanations.
neighbor_id
string
Output only. The neighbor id.
neighbor_distance
double
Output only. The neighbor distance.
NetworkSpec
Network spec.
enable_internet_access
bool
Whether to enable public internet access. Default false.
network
string
The full name of the Google Compute Engine network
subnetwork
string
The name of the subnet that this instance is in. Format: projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}
NfsMount
Represents a mount configuration for Network File System (NFS) to mount.
server
string
Required. IP address of the NFS server.
path
string
Required. Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
mount_point
string
Required. Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
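The server:path and /mnt/nfs/ conventions above compose as follows. These helpers are hypothetical, shown only to make the documented formats concrete:

```python
def nfs_source_mount(server, path):
    """Build the server:path source string, enforcing the documented
    requirement that the exported path starts with '/'."""
    if not path.startswith("/"):
        raise ValueError("NFS path must start with '/'")
    return f"{server}:{path}"

def nfs_destination_mount(mount_point):
    """The runtime mounts the share for the user under /mnt/nfs/."""
    return f"/mnt/nfs/{mount_point}"
```

For example, server "10.0.0.5" with path "/exports/data" yields the source mount "10.0.0.5:/exports/data".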
NotebookEucConfig
The euc configuration of NotebookRuntimeTemplate.
euc_disabled
bool
Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false, so by default EUC is enabled for NotebookRuntimeTemplate.
bypass_actas_check
bool
Output only. Whether the ActAs check is bypassed for the service account attached to the VM. If false, the ActAs check is required for the default Compute Engine service account. When a Runtime is created, a VM is allocated using the default Compute Engine service account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, the Runtime owner is using EUC and does not require the above permission, as the VM no longer uses the default Compute Engine SA, but a P4SA.
NotebookExecutionJob
NotebookExecutionJob represents an instance of a notebook execution.
name
string
Output only. The resource name of this NotebookExecutionJob. Format: projects/{project_id}/locations/{location}/notebookExecutionJobs/{job_id}
display_name
string
The display name of the NotebookExecutionJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Max running time of the execution job in seconds (default 86400s / 24 hrs).
schedule_resource_name
string
Output only. The Schedule resource name if this job is triggered by one. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
Output only. The state of the NotebookExecutionJob.
Output only. Populated when the NotebookExecutionJob is completed. When there is an error during notebook execution, the error details are populated.
Output only. Timestamp when this NotebookExecutionJob was created.
Output only. Timestamp when this NotebookExecutionJob was most recently updated.
labels
map<string, string>
The labels with user-defined metadata to organize NotebookExecutionJobs.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
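The label constraints above can be pre-checked client-side. The sketch below is a rough, hypothetical check covering the ASCII case only (international characters, which the service also allows, are omitted for simplicity):

```python
import re

# Documented constraints: keys and values no longer than 64 characters,
# lowercase letters, digits, underscores and dashes. This regex covers the
# ASCII case only; the service additionally allows international characters.
_LABEL_RE = re.compile(r"^[a-z0-9_\-]+$")
RESERVED_PREFIX = "aiplatform.googleapis.com/"

def is_valid_user_label(key, value):
    """Loose client-side check for a user-defined label."""
    if key.startswith(RESERVED_PREFIX):
        return False  # system-reserved keys are immutable
    return all(len(s) <= 64 and _LABEL_RE.match(s) for s in (key, value))
```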
Customer-managed encryption key spec for the notebook execution job. This field is auto-populated if the [NotebookService.NotebookRuntimeTemplate][] has an encryption spec.
notebook_source
. The input notebook. notebook_source
can be only one of the following:
The Dataform Repository pointing to a single file notebook repository.
The Cloud Storage url pointing to the ipynb file. Format: gs://bucket/notebook_file.ipynb
The contents of an input notebook file.
environment_spec
. The compute config to use for an execution job. environment_spec
can be only one of the following:
notebook_runtime_template_resource_name
string
The NotebookRuntimeTemplate to source compute configuration from.
execution_sink
. The location to store the notebook execution result. execution_sink
can be only one of the following:
gcs_output_uri
string
The Cloud Storage location to upload the result to. Format: gs://bucket-name
execution_identity
. The identity to run the execution as. execution_identity
can be only one of the following:
execution_user
string
The user email to run the execution as. Only supported by Colab runtimes.
service_account
string
The service account to run the execution as.
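Putting the unions together, a request body might look like the following sketch. Field names are the JSON (camelCase) forms of the fields in this reference; all resource names and values are hypothetical:

```python
# A hypothetical NotebookExecutionJob body in REST/JSON form, choosing one
# member from each union: a GCS notebook source, a runtime-template compute
# spec, a GCS output sink, and a service-account identity.
job = {
    "displayName": "nightly-report",  # hypothetical name
    "gcsNotebookSource": {
        "uri": "gs://my-bucket/notebooks/report.ipynb",  # hypothetical bucket
    },
    "notebookRuntimeTemplateResourceName": (
        "projects/my-project/locations/us-central1/"
        "notebookRuntimeTemplates/my-template"
    ),
    "gcsOutputUri": "gs://my-bucket/executions",
    "serviceAccount": "runner@my-project.iam.gserviceaccount.com",
}

# Exactly one member of each union may be set, e.g. for notebook_source:
notebook_source_fields = {"dataformRepositorySource", "gcsNotebookSource",
                          "directNotebookSource"}
assert len(notebook_source_fields & job.keys()) == 1
```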
DataformRepositorySource
The Dataform Repository containing the input notebook.
dataform_repository_resource_name
string
The resource name of the Dataform Repository. Format: projects/{project_id}/locations/{location}/repositories/{repository_id}
commit_sha
string
The commit SHA to read repository with. If unset, the file will be read at HEAD.
DirectNotebookSource
The content of the input notebook in ipynb format.
content
bytes
The base64-encoded contents of the input notebook file.
GcsNotebookSource
The Cloud Storage uri for the input notebook.
uri
string
The Cloud Storage uri pointing to the ipynb file. Format: gs://bucket/notebook_file.ipynb
generation
string
The version of the Cloud Storage object to read. If unset, the current version of the object is read. See https://cloud.google.com/storage/docs/metadata#generation-number.
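A minimal parser for the gs://bucket/object format used above (an illustrative helper, not SDK code):

```python
def parse_gcs_uri(uri):
    """Split a gs://bucket/object URI into (bucket, object_name)."""
    prefix = "gs://"
    if not uri.startswith(prefix):
        raise ValueError("expected a gs:// URI")
    bucket, _, name = uri[len(prefix):].partition("/")
    return bucket, name
```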
NotebookExecutionJobView
Views for Get/List NotebookExecutionJob
Enums | |
---|---|
NOTEBOOK_EXECUTION_JOB_VIEW_UNSPECIFIED |
When unspecified, the API defaults to the BASIC view. |
NOTEBOOK_EXECUTION_JOB_VIEW_BASIC |
Includes all fields except for direct notebook inputs. |
NOTEBOOK_EXECUTION_JOB_VIEW_FULL |
Includes all fields. |
NotebookIdleShutdownConfig
The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field.
Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to the minute, so the range of idle_timeout (in seconds) is: 10 * 60 ~ 1440 * 60.
idle_shutdown_disabled
bool
Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.
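A client-side check of the documented range. This is a sketch; the minute-granularity check is one reading of the "accurate to minute" note above:

```python
def validate_idle_timeout(idle_timeout_seconds):
    """Check idle_timeout against the documented range 10*60 ~ 1440*60 s."""
    if not 10 * 60 <= idle_timeout_seconds <= 1440 * 60:
        raise ValueError("idle_timeout must be between 600s and 86400s")
    if idle_timeout_seconds % 60:
        # Idle Timeout is accurate to the minute, so require whole minutes.
        raise ValueError("idle_timeout should be a whole number of minutes")
    return idle_timeout_seconds
```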
NotebookRuntime
A runtime is a virtual machine allocated to a particular user for a particular Notebook file on a temporary basis, with a lifetime limited to 24 hours.
name
string
Output only. The resource name of the NotebookRuntime.
runtime_user
string
Required. The user email of the NotebookRuntime.
Output only. The pointer to NotebookRuntimeTemplate this NotebookRuntime is created from.
proxy_uri
string
Output only. The proxy endpoint used to access the NotebookRuntime.
Output only. Timestamp when this NotebookRuntime was created.
Output only. Timestamp when this NotebookRuntime was most recently updated.
Output only. The health state of the NotebookRuntime.
display_name
string
Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the NotebookRuntime.
service_account
string
Output only. The service account that the NotebookRuntime workload runs as.
Output only. The runtime (instance) state of the NotebookRuntime.
is_upgradable
bool
Output only. Whether NotebookRuntime is upgradable.
labels
map<string, string>
The labels with user-defined metadata to organize your NotebookRuntime.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime:
- "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id.
- "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex.
Output only. Timestamp when this NotebookRuntime will expire: 1. System predefined NotebookRuntime: 24 hours after creation. After expiration, the system predefined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, the user created runtime will be stopped and allowed to upgrade.
version
string
Output only. The VM os image version of NotebookRuntime.
Output only. The type of the notebook runtime.
Output only. The idle shutdown configuration of the notebook runtime.
Output only. Customer-managed encryption key spec for the notebook runtime.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
HealthState
The substate of the NotebookRuntime to display health information.
Enums | |
---|---|
HEALTH_STATE_UNSPECIFIED |
Unspecified health state. |
HEALTHY |
NotebookRuntime is in healthy state. Applies to ACTIVE state. |
UNHEALTHY |
NotebookRuntime is in unhealthy state. Applies to ACTIVE state. |
RuntimeState
The substate of the NotebookRuntime to display the state of the runtime. The NotebookRuntime resource is in the ACTIVE state for these substates.
Enums | |
---|---|
RUNTIME_STATE_UNSPECIFIED |
Unspecified runtime state. |
RUNNING |
NotebookRuntime is in running state. |
BEING_STARTED |
NotebookRuntime is in starting state. |
BEING_STOPPED |
NotebookRuntime is in stopping state. |
STOPPED |
NotebookRuntime is in stopped state. |
BEING_UPGRADED |
NotebookRuntime is in upgrading state. It is in the middle of upgrading process. |
ERROR |
NotebookRuntime was unable to start/stop properly. |
INVALID |
NotebookRuntime is in invalid state. Cannot be recovered. |
NotebookRuntimeTemplate
A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
name
string
The resource name of the NotebookRuntimeTemplate.
display_name
string
Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
The description of the NotebookRuntimeTemplate.
is_default
bool
Output only. Whether this is the default template, used when no template is specified.
Optional. Immutable. The specification of a single machine for the template.
Optional. The specification of [persistent disk][https://cloud.google.com/compute/docs/disks/persistent-disks] attached to the runtime as data disk storage.
Optional. Network spec.
service_account
string
The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance.
If not specified, the Compute Engine default service account is used.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
map<string, string>
The labels with user-defined metadata to organize the NotebookRuntimeTemplates.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.
EUC configuration of the NotebookRuntimeTemplate.
Output only. Timestamp when this NotebookRuntimeTemplate was created.
Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.
Optional. Immutable. The type of the notebook runtime template.
Optional. Immutable. Runtime Shielded VM spec.
Customer-managed encryption key spec for the notebook runtime.
NotebookRuntimeTemplateRef
Points to a NotebookRuntimeTemplateRef.
notebook_runtime_template
string
Immutable. A resource name of the NotebookRuntimeTemplate.
NotebookRuntimeType
Represents a notebook runtime type.
Enums | |
---|---|
NOTEBOOK_RUNTIME_TYPE_UNSPECIFIED |
Unspecified notebook runtime type, NotebookRuntimeType will default to USER_DEFINED. |
USER_DEFINED |
Runtime or template with customized configurations from the user. |
ONE_CLICK |
Runtime or template with system-defined configurations. |
PSCAutomationConfig
PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
project_id
string
Required. Project id used to create forwarding rule.
PairwiseQuestionAnsweringQualityInput
Input for pairwise question answering quality metric.
Required. Spec for pairwise question answering quality score metric.
Required. Pairwise question answering quality instance.
PairwiseQuestionAnsweringQualityInstance
Spec for pairwise question answering quality instance.
prediction
string
Required. Output of the candidate model.
baseline_prediction
string
Required. Output of the baseline model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Required. Text to answer the question.
instruction
string
Required. Question Answering prompt for LLM.
PairwiseQuestionAnsweringQualityResult
Spec for pairwise question answering quality result.
explanation
string
Output only. Explanation for question answering quality score.
confidence
float
Output only. Confidence for question answering quality score.
PairwiseQuestionAnsweringQualitySpec
Spec for pairwise question answering quality score metric.
use_reference
bool
Optional. Whether to use instance.reference to compute question answering quality.
version
int32
Optional. Which version to use for evaluation.
PairwiseSummarizationQualityInput
Input for pairwise summarization quality metric.
Required. Spec for pairwise summarization quality score metric.
Required. Pairwise summarization quality instance.
PairwiseSummarizationQualityInstance
Spec for pairwise summarization quality instance.
prediction
string
Required. Output of the candidate model.
baseline_prediction
string
Required. Output of the baseline model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Required. Text to be summarized.
instruction
string
Required. Summarization prompt for LLM.
PairwiseSummarizationQualityResult
Spec for pairwise summarization quality result.
explanation
string
Output only. Explanation for summarization quality score.
confidence
float
Output only. Confidence for summarization quality score.
PairwiseSummarizationQualitySpec
Spec for pairwise summarization quality score metric.
use_reference
bool
Optional. Whether to use instance.reference to compute pairwise summarization quality.
version
int32
Optional. Which version to use for evaluation.
Part
A datatype containing media that is part of a multi-part Content
message.
A Part
consists of data which has an associated datatype. A Part
can only contain one of the accepted types in Part.data
.
A Part
must have a fixed IANA MIME type identifying the type and subtype of the media if inline_data
or file_data
field is filled with raw bytes.
Union field data
.
data
can be only one of the following:
text
string
Optional. Text part (can be code).
Optional. Inlined bytes data.
Optional. URI based data.
Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
Union field metadata
.
metadata
can be only one of the following:
Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
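Hypothetical JSON forms of a Part, setting one data union member each. All values here are invented for illustration:

```python
# A Part carries exactly one kind of data; inline_data and file_data
# additionally require a fixed IANA MIME type for the raw bytes.
text_part = {"text": "Explain the attached image in two sentences."}
inline_part = {
    "inlineData": {
        "mimeType": "image/png",     # fixed IANA MIME type for raw bytes
        "data": "iVBORw0KGgo=",      # base64-encoded bytes (truncated sample)
    }
}

def part_kind(part):
    """Return which `data` union member a Part sets (must be exactly one)."""
    members = {"text", "inlineData", "fileData",
               "functionCall", "functionResponse"}
    (kind,) = members & part.keys()
    return kind
```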
PauseModelDeploymentMonitoringJobRequest
Request message for JobService.PauseModelDeploymentMonitoringJob
.
name
string
Required. The resource name of the ModelDeploymentMonitoringJob to pause. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
PauseScheduleRequest
Request message for ScheduleService.PauseSchedule
.
name
string
Required. The name of the Schedule resource to be paused. Format: projects/{project}/locations/{location}/schedules/{schedule}
PersistentDiskSpec
Represents the spec of [persistent disk][https://cloud.google.com/compute/docs/disks/persistent-disks] options.
disk_type
string
Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive), "pd-standard" (Persistent Disk Hard Disk Drive), "pd-balanced" (Balanced Persistent Disk), "pd-extreme" (Extreme Persistent Disk).
disk_size_gb
int64
Size in GB of the disk (default is 100GB).
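The documented defaults and valid values can be captured in a small helper (illustrative only, not SDK code):

```python
VALID_DISK_TYPES = {"pd-ssd", "pd-standard", "pd-balanced", "pd-extreme"}

def persistent_disk_spec(disk_type=None, disk_size_gb=None):
    """Fill in the documented defaults and check the disk type."""
    disk_type = disk_type or "pd-standard"   # documented default
    if disk_type not in VALID_DISK_TYPES:
        raise ValueError(f"invalid disk_type: {disk_type}")
    return {"disk_type": disk_type, "disk_size_gb": disk_size_gb or 100}
```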
PersistentResource
Represents long-lasting resources that are dedicated to users to run custom workloads. A PersistentResource can have multiple node pools, and each node pool can have its own machine spec.
name
string
Immutable. Resource name of a PersistentResource.
display_name
string
Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Required. The spec of the pools of different resources.
Output only. The detailed state of the PersistentResource.
Output only. Only populated when persistent resource's state is STOPPING
or ERROR
.
Output only. Time when the PersistentResource was created.
Output only. Time when the PersistentResource for the first time entered the RUNNING
state.
Output only. Time when the PersistentResource was most recently updated.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize PersistentResource.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
network
string
Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC
. Format is of the form projects/{project}/global/networks/{network}
. Where {project} is a project number, as in 12345
, and {network} is a network name.
To specify this field, you must have already configured VPC Network Peering for Vertex AI.
If this field is left unspecified, the resources aren't peered with any network.
Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
Output only. Runtime information of the Persistent Resource.
reserved_ip_ranges[]
string
Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource.
If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network.
Example: ['vertex-ai-ip-range'].
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
State
Describes the PersistentResource state.
Enums | |
---|---|
STATE_UNSPECIFIED |
Not set. |
PROVISIONING |
The PROVISIONING state indicates the persistent resources is being created. |
RUNNING |
The RUNNING state indicates the persistent resource is healthy and fully usable. |
STOPPING |
The STOPPING state indicates the persistent resource is being deleted. |
ERROR |
The ERROR state indicates the persistent resource may be unusable. Details can be found in the error field. |
REBOOTING |
The REBOOTING state indicates the persistent resource is being rebooted (PR is not available right now but is expected to be ready again later). |
UPDATING |
The UPDATING state indicates the persistent resource is being updated. |
PipelineFailurePolicy
Represents the failure policy of a pipeline. Currently, the default of a pipeline is that the pipeline will continue to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new tasks when a task has failed. Any scheduled tasks will continue to completion.
Enums | |
---|---|
PIPELINE_FAILURE_POLICY_UNSPECIFIED |
Default value, and follows fail slow behavior. |
PIPELINE_FAILURE_POLICY_FAIL_SLOW |
Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed. |
PIPELINE_FAILURE_POLICY_FAIL_FAST |
Indicates that the pipeline should stop scheduling new tasks after a task has failed. |
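The difference between the two policies can be illustrated with a toy scheduler. This is a simplification in which tasks are scheduled one at a time and each completes before the next is scheduled; in a real pipeline, already-scheduled tasks run concurrently and continue to completion:

```python
def run_pipeline(tasks, policy="PIPELINE_FAILURE_POLICY_FAIL_SLOW"):
    """Toy scheduler: `tasks` is an ordered list of (name, succeeds) pairs.

    Under FAIL_FAST, no new task is scheduled after the first failure.
    Under FAIL_SLOW (the default), scheduling continues until no more
    tasks can be executed.
    """
    executed, failed = [], False
    for name, succeeds in tasks:
        if failed and policy == "PIPELINE_FAILURE_POLICY_FAIL_FAST":
            break  # stop scheduling new tasks after a failure
        executed.append(name)
        if not succeeds:
            failed = True
    return executed

tasks = [("a", True), ("b", False), ("c", True)]
```

With these tasks, FAIL_FAST executes only "a" and "b", while FAIL_SLOW also schedules "c".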
PipelineJob
An instance of a machine learning PipelineJob.
name
string
Output only. The resource name of the PipelineJob.
display_name
string
The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Output only. Pipeline creation time.
Output only. Pipeline start time.
Output only. Pipeline end time.
Output only. Timestamp when this PipelineJob was most recently updated.
The spec of the pipeline.
Output only. The detailed state of the job.
Output only. The details of pipeline run. Not available in the list view.
Output only. The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
labels
map<string, string>
The labels with user-defined metadata to organize PipelineJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Note there is some reserved label key for Vertex AI Pipelines. - vertex-ai-pipelines-run-billing-id
, any user-set value will be overridden.
Runtime config of the pipeline.
Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
service_account
string
The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project will be used. See https://cloud.google.com/compute/docs/access/service-accounts#default_service_account
Users starting the pipeline must have the iam.serviceAccounts.actAs
permission on this service account.
network
string
The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, projects/12345/global/networks/myVPC
. Format is of the form projects/{project}/global/networks/{network}
. Where {project} is a project number, as in 12345
, and {network} is a network name.
Private services access must already be configured for the network. Pipeline job will apply the network configuration to the Google Cloud resources being launched, if applied, such as Vertex AI Training or Dataflow job. If left unspecified, the workload is not peered with any network.
reserved_ip_ranges[]
string
A list of names for the reserved ip ranges under the VPC network that can be used for this Pipeline Job's workload.
If set, we will deploy the Pipeline Job's workload within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network.
Example: ['vertex-ai-ip-range'].
template_uri
string
A template URI from which PipelineJob.pipeline_spec
will be downloaded if pipeline_spec is empty. Currently, only URIs from Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
Output only. Pipeline template metadata. Fields will be populated if PipelineJob.template_uri
is from supported template registry.
schedule_name
string
Output only. The schedule resource name. Only returned if the Pipeline is created by Schedule API.
preflight_validations
bool
Optional. Whether to do component level validations before job creation.
RuntimeConfig
The runtime config of a PipelineJob.
Deprecated. Use RuntimeConfig.parameter_values
instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec
to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version
2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.
gcs_output_directory
string
Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key}
under the specified output directory. The service account specified in this pipeline must have the storage.objects.get
and storage.objects.create
permissions for this bucket.
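The {job_id}/{task_id}/{output_key} sub-path pattern described above can be sketched as follows (hypothetical helper; the real paths are generated by the service):

```python
def artifact_output_path(gcs_output_directory, job_id, task_id, output_key):
    """Generate an artifact path under the documented
    {job_id}/{task_id}/{output_key} sub-path pattern."""
    return f"{gcs_output_directory.rstrip('/')}/{job_id}/{task_id}/{output_key}"
```

For example, a root of gs://my-bucket/pipeline-root with job "job1", task "42" and output key "model" yields gs://my-bucket/pipeline-root/job1/42/model.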
The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec
to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version
2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
Represents the failure policy of a pipeline. Currently, the default of a pipeline is that the pipeline will continue to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new tasks when a task has failed. Any scheduled tasks will continue to completion.
The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
InputArtifact
The type of an input artifact.
Union field kind
.
kind
can be only one of the following:
artifact_id
string
Artifact resource id from MLMD, which is the last portion of an artifact resource name: projects/{project}/locations/{location}/metadataStores/default/artifacts/{artifact_id}
. The artifact must stay within the same project, location and default metadatastore as the pipeline.
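Extracting the artifact_id portion from a full resource name can be done as follows (an illustrative helper):

```python
def artifact_id_from_resource_name(resource_name):
    """Extract the trailing artifact_id from an MLMD artifact resource name:
    projects/{project}/locations/{location}/metadataStores/default/artifacts/{artifact_id}
    """
    parts = resource_name.split("/")
    if len(parts) < 2 or parts[-2] != "artifacts":
        raise ValueError("not an artifact resource name")
    return parts[-1]
```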
PipelineJobDetail
The runtime detail of PipelineJob.
Output only. The context of the pipeline.
Output only. The context of the current pipeline run.
Output only. The runtime details of the tasks under the pipeline.
PipelineState
Describes the state of a pipeline.
Enums | |
---|---|
PIPELINE_STATE_UNSPECIFIED |
The pipeline state is unspecified. |
PIPELINE_STATE_QUEUED |
The pipeline has been created or resumed, and processing has not yet begun. |
PIPELINE_STATE_PENDING |
The service is preparing to run the pipeline. |
PIPELINE_STATE_RUNNING |
The pipeline is in progress. |
PIPELINE_STATE_SUCCEEDED |
The pipeline completed successfully. |
PIPELINE_STATE_FAILED |
The pipeline failed. |
PIPELINE_STATE_CANCELLING |
The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED. |
PIPELINE_STATE_CANCELLED |
The pipeline has been cancelled. |
PIPELINE_STATE_PAUSED |
The pipeline has been stopped, and can be resumed. |
PipelineTaskDetail
The runtime detail of a task execution.
task_id
int64
Output only. The system generated ID of the task.
parent_task_id
int64
Output only. The id of the parent task if the task is within a component scope. Empty if the task is at the root level.
task_name
string
Output only. The user specified name of the task that is defined in pipeline_spec
.
Output only. Task create time.
Output only. Task start time.
Output only. Task end time.
Output only. The detailed execution info.
Output only. State of the task.
Output only. The execution metadata of the task.
Output only. The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
Output only. A list of task statuses. This field keeps a record of the task status evolving over time.
Output only. The runtime input artifacts of the task.
Output only. The runtime output artifacts of the task.
ArtifactList
A list of artifact metadata.
Output only. A list of artifact metadata.
PipelineTaskStatus
A single record of the task status.
Output only. Update time of this status.
Output only. The state of the task.
Output only. The error that occurred during the state. May be set when the state is any of the non-final states (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error here is final and will not be retried. If the state is a non-final state, the error indicates a system error that is being retried.
State
Specifies the state of a TaskExecution.
Enums | |
---|---|
STATE_UNSPECIFIED |
Unspecified. |
PENDING |
Specifies pending state for the task. |
RUNNING |
Specifies task is being executed. |
SUCCEEDED |
Specifies task completed successfully. |
CANCEL_PENDING |
Specifies that task cancellation is pending. |
CANCELLING |
Specifies task is being cancelled. |
CANCELLED |
Specifies task was cancelled. |
FAILED |
Specifies task failed. |
SKIPPED |
Specifies task was skipped due to cache hit. |
NOT_TRIGGERED |
Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the condition field of PipelineJob.pipeline_spec . |
PipelineTaskExecutorDetail
The runtime detail of a pipeline executor.
Union field details
.
details
can be only one of the following:
Output only. The detailed info for a container executor.
Output only. The detailed info for a custom job executor.
ContainerDetail
The detail of a container execution. It contains the job names of the lifecycle of a container execution.
main_job
string
Output only. The name of the CustomJob
for the main container execution.
pre_caching_check_job
string
Output only. The name of the CustomJob
for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec
specifies the pre_caching_check
hook in the lifecycle events.
failed_main_jobs[]
string
Output only. The names of the previously failed CustomJob
for the main container executions. The list includes all attempts in chronological order.
failed_pre_caching_check_jobs[]
string
Output only. The names of the previously failed CustomJob
for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec
specifies the pre_caching_check
hook in the lifecycle events. The list includes all attempts in chronological order.
CustomJobDetail
PipelineTemplateMetadata
Pipeline template metadata if PipelineJob.template_uri
is from supported template registry. Currently, the only supported registry is Artifact Registry.
version
string
The version_name in artifact registry.
Will always be presented in output if the PipelineJob.template_uri
is from supported template registry.
Format is "sha256:abcdef123456...".
Port
Represents a network port in a container.
container_port
int32
The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.
PredefinedSplit
Assigns input data to training, validation, and test sets based on the value of a provided key.
Supported only for tabular Datasets.
key
string
Required. The key is a name of one of the Dataset's data columns. The value of the key (either the label's value or value in the column) must be one of {training
, validation
, test
}, and it defines to which set the given piece of data is assigned. If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.
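The assignment rule above can be sketched in Python. The column name ml_use and the sample rows are hypothetical; the only behavior taken from the description is that a missing or invalid key value causes the row to be ignored:

```python
# Hypothetical sketch of PredefinedSplit behavior: each row is routed to the
# training, validation, or test set by the value found under `key`; rows with
# a missing or invalid value are ignored by the pipeline.
rows = [
    {"feature": 1.0, "ml_use": "training"},
    {"feature": 2.0, "ml_use": "validation"},
    {"feature": 3.0, "ml_use": "test"},
    {"feature": 4.0, "ml_use": "oops"},  # invalid value -> ignored
    {"feature": 5.0},                    # key absent -> ignored
]

def apply_predefined_split(rows, key):
    splits = {"training": [], "validation": [], "test": []}
    for row in rows:
        value = row.get(key)
        if value in splits:
            splits[value].append(row)
    return splits

splits = apply_predefined_split(rows, "ml_use")
```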
PredictRequest
Request message for PredictionService.Predict
.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the prediction call errors for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's
PredictSchemata's
instance_schema_uri
.
The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's
PredictSchemata's
parameters_schema_uri
.
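A minimal PredictRequest in its REST JSON form could look like the sketch below. The project, location, and endpoint IDs are placeholders, and the instance and parameter payloads are made-up examples; the real shapes are dictated by the deployed Model's instance_schema_uri and parameters_schema_uri:

```python
# Hypothetical PredictRequest body (REST JSON representation).
endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

predict_request = {
    # One entry per instance to predict on; schema set by instance_schema_uri.
    "instances": [
        {"feature_a": 0.5, "feature_b": "red"},
    ],
    # Prediction-governing parameters; schema set by parameters_schema_uri.
    "parameters": {"confidence_threshold": 0.8},
}
```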
PredictRequestResponseLoggingConfig
Configuration for logging request-response to a BigQuery table.
enabled
bool
If logging is enabled or not.
sampling_rate
double
Percentage of requests to be logged, expressed as a fraction in range(0,1].
BigQuery table for logging. If only a project is given, a new dataset will be created with the name logging_<endpoint-display-name>_<endpoint-id>, and the table will be named request_response_logging.
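The sampling semantics above can be sketched as a per-request coin flip: with the config enabled, each request is logged with probability sampling_rate, which must lie in (0, 1]. The config dict and the should_log helper are illustrative, not part of any client library:

```python
import random

# Hypothetical sketch of PredictRequestResponseLoggingConfig sampling:
# log a request iff logging is enabled and a uniform draw falls below
# sampling_rate (a fraction in (0, 1]).
logging_config = {"enabled": True, "sampling_rate": 0.1}

def should_log(config, rng=random.random):
    return config["enabled"] and rng() < config["sampling_rate"]
```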
PredictResponse
Response message for PredictionService.Predict
.
The predictions that are the output of the predictions call. The schema of any single prediction may be specified via Endpoint's DeployedModels' Model's
PredictSchemata's
prediction_schema_uri
.
deployed_model_id
string
ID of the Endpoint's DeployedModel that served this prediction.
model
string
Output only. The resource name of the Model which is deployed as the DeployedModel that this prediction hits.
model_version_id
string
Output only. The version ID of the Model which is deployed as the DeployedModel that this prediction hits.
model_display_name
string
Output only. The display name
of the Model which is deployed as the DeployedModel that this prediction hits.
Output only. Request-level metadata returned by the model. The metadata type will be dependent upon the model implementation.
PredictSchemata
Contains the schemata used in Model's predictions and explanations via PredictionService.Predict
, PredictionService.Explain
and BatchPredictionJob
.
instance_schema_uri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances
, ExplainRequest.instances
and BatchPredictionJob.input_config
. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
parameters_schema_uri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters
, ExplainRequest.parameters
and BatchPredictionJob.model_parameters
. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no parameters are supported, it is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
prediction_schema_uri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions
, ExplainResponse.explanations
, and BatchPredictionJob.output_config
. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
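As an illustration of what such a schema file could contain, the sketch below shows a parsed OpenAPI 3.0.2 Schema Object for a single instance. The field names are invented for the example and do not come from any real Model:

```python
# Hypothetical content of an instance_schema_uri YAML file, shown as the
# parsed structure (an OpenAPI 3.0.2 Schema Object describing one instance).
instance_schema = {
    "type": "object",
    "properties": {
        "feature_a": {"type": "number"},
        "feature_b": {"type": "string", "enum": ["red", "green", "blue"]},
    },
    "required": ["feature_a"],
}
```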
Presets
Preset configuration for example-based explanations
The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE
.
Modality
Preset option controlling parameters for different modalities
Enums | |
---|---|
MODALITY_UNSPECIFIED |
Should not be set. Added as a recommended best practice for enums |
IMAGE |
IMAGE modality |
TEXT |
TEXT modality |
TABULAR |
TABULAR modality |
Query
Preset option controlling parameters for query speed-precision trade-off
Enums | |
---|---|
PRECISE |
More precise neighbors as a trade-off against slower response. |
FAST |
Faster response as a trade-off against less precise neighbors. |
PrivateEndpoints
PrivateEndpoints proto is used to provide paths for users to send requests privately. To send requests via private service access, use predict_http_uri, explain_http_uri, or health_http_uri. To send requests via Private Service Connect, use service_attachment.
predict_http_uri
string
Output only. Http(s) path to send prediction requests.
explain_http_uri
string
Output only. Http(s) path to send explain requests.
health_http_uri
string
Output only. Http(s) path to send health check requests.
service_attachment
string
Output only. The name of the service attachment resource. Populated if private service connect is enabled.
PrivateServiceConnectConfig
Represents configuration for private service connect.
enable_private_service_connect
bool
Required. If true, expose the IndexEndpoint via private service connect.
project_allowlist[]
string
A list of Projects from which the forwarding rule will target the service attachment.
Probe
Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.
period_seconds
int32
How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. Must be less than timeout_seconds.
Maps to Kubernetes probe argument 'periodSeconds'.
timeout_seconds
int32
Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater than or equal to period_seconds.
Maps to Kubernetes probe argument 'timeoutSeconds'.
Union field probe_type
.
probe_type
can be only one of the following:
ExecAction probes the health of a container by executing a command.
ExecAction
ExecAction specifies a command to execute.
command[]
string
Command is the command line to execute inside the container; the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, not run inside a shell, so traditional shell constructs ('|', etc.) won't work. To use a shell, you need to explicitly call out to that shell. An exit status of 0 is treated as live/healthy and non-zero as unhealthy.
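The exec'd-not-shelled distinction can be made concrete with two probe specs. In the first, '|' is just another argv element handed to cat, so no piping happens; the second wraps the pipeline in an explicit shell invocation, as the description recommends. Both command lines are illustrative:

```python
# Sketch: why shell constructs need an explicit shell in an ExecAction.
# Here '|' is passed to `cat` as a literal argument -- no pipe occurs:
probe_without_shell = {
    "exec": {"command": ["cat", "/tmp/healthy", "|", "grep", "ok"]}
}
# Wrapping the pipeline in `/bin/sh -c` makes the shell interpret '|':
probe_with_shell = {
    "exec": {"command": ["/bin/sh", "-c", "cat /tmp/healthy | grep ok"]}
}
```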
PscAutomatedEndpoints
PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
project_id
string
Corresponding project_id in pscAutomationConfigs
network
string
Corresponding network in pscAutomationConfigs.
match_address
string
IP address created by the automated forwarding rule.
PublisherModel
A Model Garden Publisher Model.
name
string
Output only. The resource name of the PublisherModel.
version_id
string
Output only. Immutable. The version ID of the PublisherModel. A new version is committed when a new model version is uploaded under an existing model id. It is an auto-incrementing decimal number in string representation.
Required. Indicates the open source category of the publisher model.
Optional. Supported call-to-action options.
frameworks[]
string
Optional. Additional information about the model's Frameworks.
Optional. Indicates the launch stage of the model.
Optional. Indicates the state of the model version.
publisher_model_template
string
Optional. Output only. Immutable. Used to indicate this model has a publisher model and provide the template of the publisher model resource name.
Optional. The schemata that describes formats of the PublisherModel's predictions and explanations as given and returned via PredictionService.Predict
.
CallToAction
Actions that can be taken on this Publisher Model.
Optional. To view Rest API docs.
Optional. Open notebook of the PublisherModel.
Optional. Create application using the PublisherModel.
Optional. Open fine-tuning pipeline of the PublisherModel.
Optional. Open prompt-tuning pipeline of the PublisherModel.
Optional. Open Genie / Playground.
Optional. Deploy the PublisherModel to Vertex Endpoint.
Optional. Deploy PublisherModel to Google Kubernetes Engine.
Optional. Open in Generation AI Studio.
Optional. Request for access.
Optional. Open evaluation pipeline of the PublisherModel.
Optional. Open notebooks of the PublisherModel.
Optional. Open fine-tuning pipelines of the PublisherModel.
Deploy
Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests.
model_display_name
string
Optional. Default model display name.
Optional. Large model reference. When this is set, model_artifact_spec is not needed.
Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models.
artifact_uri
string
Optional. The path to the directory containing the Model artifact and any of its supporting files.
title
string
Required. The title of the regional resource reference.
public_artifact_uri
string
Optional. The signed URI for ephemeral Cloud Storage access to model artifact.
prediction_resources
. The prediction (for example, the machine) resources that the DeployedModel uses. prediction_resources
can be only one of the following:
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration.
deploy_task_name
string
Optional. The name of the deploy task (e.g., "text to image generation").
Optional. Metadata information about this deployment config.
DeployMetadata
Metadata information about the deployment for managing deployment config.
labels
map<string, string>
Optional. Labels for the deployment. For managing deployment config like verifying, source of deployment config, etc.
sample_request
string
Optional. Sample request for deployed endpoint.
DeployGke
Configurations for PublisherModel GKE deployment
gke_yaml_configs[]
string
Optional. GKE deployment configuration in yaml format.
OpenFineTuningPipelines
Open fine tuning pipelines.
Required. Regional resource references to fine tuning pipelines.
OpenNotebooks
Open notebooks.
Required. Regional resource references to notebooks.
RegionalResourceReferences
The regional resource name or the URI. Key is the region, e.g., us-central1, europe-west2, global, etc.
Required.
title
string
Required.
resource_title
string
Optional. Title of the resource.
resource_use_case
string
Optional. Use case (CUJ) of the resource.
resource_description
string
Optional. Description of the resource.
ViewRestApi
Rest API docs.
Required.
title
string
Required. The title of the view rest API.
Documentation
A named piece of documentation.
title
string
Required. E.g., OVERVIEW, USE CASES, DOCUMENTATION, SDK & SAMPLES, JAVA, NODE.JS, etc.
content
string
Required. Content of this piece of document (in Markdown format).
LaunchStage
An enum representing the launch stage of a PublisherModel.
Enums | |
---|---|
LAUNCH_STAGE_UNSPECIFIED |
The model launch stage is unspecified. |
EXPERIMENTAL |
Used to indicate the PublisherModel is at Experimental launch stage, available to a small set of customers. |
PRIVATE_PREVIEW |
Used to indicate the PublisherModel is at Private Preview launch stage, only available to a small set of customers, although a larger set of customers than an Experimental launch. Previews are the first launch stage used to get feedback from customers. |
PUBLIC_PREVIEW |
Used to indicate the PublisherModel is at Public Preview launch stage, available to all customers, although not supported for production workloads. |
GA |
Used to indicate the PublisherModel is at GA launch stage, available to all customers and ready for production workload. |
OpenSourceCategory
An enum representing the open source category of a PublisherModel.
Enums | |
---|---|
OPEN_SOURCE_CATEGORY_UNSPECIFIED |
The open source category is unspecified, which should not be used. |
PROPRIETARY |
Used to indicate the PublisherModel is not open sourced. |
GOOGLE_OWNED_OSS_WITH_GOOGLE_CHECKPOINT |
Used to indicate the PublisherModel is a Google-owned open source model w/ Google checkpoint. |
THIRD_PARTY_OWNED_OSS_WITH_GOOGLE_CHECKPOINT |
Used to indicate the PublisherModel is a 3p-owned open source model w/ Google checkpoint. |
GOOGLE_OWNED_OSS |
Used to indicate the PublisherModel is a Google-owned pure open source model. |
THIRD_PARTY_OWNED_OSS |
Used to indicate the PublisherModel is a 3p-owned pure open source model. |
ResourceReference
Reference to a resource.
Union field reference
.
reference
can be only one of the following:
uri
string
The URI of the resource.
resource_name
string
The resource name of the Google Cloud resource.
use_case
(deprecated)
string
Use case (CUJ) of the resource.
description
(deprecated)
string
Description of the resource.
VersionState
An enum representing the state of the PublicModelVersion.
Enums | |
---|---|
VERSION_STATE_UNSPECIFIED |
The version state is unspecified. |
VERSION_STATE_STABLE |
Used to indicate the version is stable. |
VERSION_STATE_UNSTABLE |
Used to indicate the version is unstable. |
PublisherModelView
View enumeration of PublisherModel.
Enums | |
---|---|
PUBLISHER_MODEL_VIEW_UNSPECIFIED |
The default / unset value. The API will default to the BASIC view. |
PUBLISHER_MODEL_VIEW_BASIC |
Include basic metadata about the publisher model, but not the full contents. |
PUBLISHER_MODEL_VIEW_FULL |
Include everything. |
PUBLISHER_MODEL_VERSION_VIEW_BASIC |
Include: VersionId, ModelVersionExternalName, and SupportedActions. |
PurgeArtifactsMetadata
Details of operations that perform MetadataService.PurgeArtifacts
.
Operation metadata for purging Artifacts.
PurgeArtifactsRequest
Request message for MetadataService.PurgeArtifacts
.
parent
string
Required. The metadata store to purge Artifacts from. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
filter
string
Required. A required filter matching the Artifacts to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00
.
force
bool
Optional. Flag to indicate whether to actually perform the purge. If force
is set to false, the method will return a sample of Artifact names that would be deleted.
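The force flag effectively gives the purge a dry-run mode: with force=false the method returns only a sample of the Artifact names that would be deleted, and with force=true it deletes them. The sketch below shows both request bodies in REST JSON form; the metadata store name and filter value are placeholders:

```python
# Hypothetical PurgeArtifactsRequest bodies: dry run vs. actual purge.
parent = "projects/my-project/locations/us-central1/metadataStores/default"

dry_run_request = {
    "parent": parent,
    "filter": 'update_time <= "2020-11-19T11:30:00-04:00"',
    "force": False,  # nothing is deleted; response carries purge_sample[]
}
# Same request with force=True actually performs the deletion.
purge_request = dict(dry_run_request, force=True)
```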
PurgeArtifactsResponse
Response message for MetadataService.PurgeArtifacts
.
purge_count
int64
The number of Artifacts that this request deleted (or, if force
is false, the number of Artifacts that will be deleted). This can be an estimate.
purge_sample[]
string
A sample of the Artifact names that will be deleted. Only populated if force
is set to false. The maximum number of samples is 100 (it is possible to return fewer).
PurgeContextsMetadata
Details of operations that perform MetadataService.PurgeContexts
.
Operation metadata for purging Contexts.
PurgeContextsRequest
Request message for MetadataService.PurgeContexts
.
parent
string
Required. The metadata store to purge Contexts from. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
filter
string
Required. A required filter matching the Contexts to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00
.
force
bool
Optional. Flag to indicate whether to actually perform the purge. If force
is set to false, the method will return a sample of Context names that would be deleted.
PurgeContextsResponse
Response message for MetadataService.PurgeContexts
.
purge_count
int64
The number of Contexts that this request deleted (or, if force
is false, the number of Contexts that will be deleted). This can be an estimate.
purge_sample[]
string
A sample of the Context names that will be deleted. Only populated if force
is set to false. The maximum number of samples is 100 (it is possible to return fewer).
PurgeExecutionsMetadata
Details of operations that perform MetadataService.PurgeExecutions
.
Operation metadata for purging Executions.
PurgeExecutionsRequest
Request message for MetadataService.PurgeExecutions
.
parent
string
Required. The metadata store to purge Executions from. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}
filter
string
Required. A required filter matching the Executions to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00
.
force
bool
Optional. Flag to indicate whether to actually perform the purge. If force
is set to false, the method will return a sample of Execution names that would be deleted.
PurgeExecutionsResponse
Response message for MetadataService.PurgeExecutions
.
purge_count
int64
The number of Executions that this request deleted (or, if force
is false, the number of Executions that will be deleted). This can be an estimate.
purge_sample[]
string
A sample of the Execution names that will be deleted. Only populated if force
is set to false. The maximum number of samples is 100 (it is possible to return fewer).
PythonPackageSpec
The spec of a Python packaged code.
executor_image_uri
string
Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
package_uris[]
string
Required. The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
Authorization requires the following IAM permission on the specified resource packageUris
:
storage.objects.get
python_module
string
Required. The Python module name to run after installing the packages.
args[]
string
Command line arguments to be passed to the Python task.
Environment variables to be passed to the python module. Maximum limit is 100.
QueryArtifactLineageSubgraphRequest
Request message for MetadataService.QueryArtifactLineageSubgraph
.
artifact
string
Required. The resource name of the Artifact whose Lineage needs to be retrieved as a LineageSubgraph. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}
The request may error with FAILED_PRECONDITION if the number of Artifacts, the number of Executions, or the number of Events that would be returned for the Context exceeds 1000.
max_hops
int32
Specifies the size of the lineage graph in terms of the number of hops from the specified artifact. Negative value: an INVALID_ARGUMENT error is returned. 0: only the input artifact is returned. No value: transitive closure is performed to return the complete graph.
filter
string
Filter specifying the boolean condition for the Artifacts to satisfy in order to be part of the Lineage Subgraph. The syntax to define the filter query is based on https://google.aip.dev/160. The supported set of filters includes the following:
- Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, uri, state, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC-3339 format. For example: create_time = "2020-11-19T11:30:00-04:00".
- Metadata field: To filter on metadata fields, use a traversal operation as follows: metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. If the field name contains special characters (such as a colon), embed it inside double quotes. For example: metadata."field:1".number_value = 10.0.
Each of the above supported filter types can be combined using logical operators (AND and OR). The maximum nested expression depth allowed is 5. For example: display_name = "test" AND metadata.field1.bool_value = true
.
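Such a filter expression can be assembled as a plain string following the grammar above. The metadata field name and values below are hypothetical:

```python
# Building a lineage-subgraph filter string from the documented grammar:
# an attribute clause plus a metadata clause, combined with AND. A field
# name containing a colon must be embedded in double quotes.
attribute_clause = 'display_name = "test"'
metadata_clause = 'metadata."field:1".number_value = 10.0'
filter_expr = f"{attribute_clause} AND {metadata_clause}"
```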
QueryContextLineageSubgraphRequest
Request message for MetadataService.QueryContextLineageSubgraph
.
context
string
Required. The resource name of the Context whose Artifacts and Executions should be retrieved as a LineageSubgraph. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
The request may error with FAILED_PRECONDITION if the number of Artifacts, the number of Executions, or the number of Events that would be returned for the Context exceeds 1000.
QueryDeployedModelsRequest
Request message for QueryDeployedModels method.
deployment_resource_pool
string
Required. The name of the target DeploymentResourcePool to query. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
page_size
int32
The maximum number of DeployedModels to return. The service may return fewer than this value.
page_token
string
A page token, received from a previous QueryDeployedModels
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to QueryDeployedModels
must match the call that provided the page token.
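The page_token protocol described above can be sketched as a loop that re-issues the request with the returned next_page_token until it comes back empty. call_query_deployed_models is a stand-in for the real RPC, and the response is modeled as a plain dict:

```python
# Generic sketch of QueryDeployedModels pagination: keep requesting pages,
# carrying the next_page_token forward, until the token is omitted/empty.
def list_all_deployed_models(call_query_deployed_models, pool, page_size=100):
    models, page_token = [], ""
    while True:
        response = call_query_deployed_models(
            deployment_resource_pool=pool,
            page_size=page_size,
            page_token=page_token,
        )
        models.extend(response.get("deployed_model_refs", []))
        page_token = response.get("next_page_token", "")
        if not page_token:  # omitted token means no further pages
            return models
```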
QueryDeployedModelsResponse
Response message for QueryDeployedModels method.
DEPRECATED. Use deployed_model_refs instead.
next_page_token
string
A token, which can be sent as page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
References to the DeployedModels that share the specified deploymentResourcePool.
total_deployed_model_count
int32
The total number of DeployedModels on this DeploymentResourcePool.
total_endpoint_count
int32
The total number of Endpoints that have DeployedModels on this DeploymentResourcePool.
QueryExecutionInputsAndOutputsRequest
Request message for MetadataService.QueryExecutionInputsAndOutputs
.
execution
string
Required. The resource name of the Execution whose input and output Artifacts should be retrieved as a LineageSubgraph. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
QuestionAnsweringCorrectnessInput
Input for question answering correctness metric.
Required. Spec for question answering correctness score metric.
Required. Question answering correctness instance.
QuestionAnsweringCorrectnessInstance
Spec for question answering correctness instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Optional. Text provided as context to answer the question.
instruction
string
Required. The question asked and other instruction in the inference prompt.
QuestionAnsweringCorrectnessResult
Spec for question answering correctness result.
explanation
string
Output only. Explanation for question answering correctness score.
score
float
Output only. Question Answering Correctness score.
confidence
float
Output only. Confidence for question answering correctness score.
QuestionAnsweringCorrectnessSpec
Spec for question answering correctness metric.
use_reference
bool
Optional. Whether to use instance.reference to compute question answering correctness.
version
int32
Optional. Which version to use for evaluation.
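Assembled from the fields above, a question answering correctness input pairs a spec (with use_reference and version) with one instance. The structure below is a hedged sketch in JSON form, and all the strings are made-up sample data:

```python
# Hypothetical QuestionAnsweringCorrectnessInput: a metric spec plus one
# instance with prediction, reference, context, and instruction.
qa_correctness_input = {
    "metric_spec": {"use_reference": True, "version": 1},
    "instance": {
        "prediction": "Paris",
        "reference": "Paris",
        "context": "France's capital and largest city is Paris.",
        "instruction": "What is the capital of France?",
    },
}
```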
QuestionAnsweringHelpfulnessInput
Input for question answering helpfulness metric.
Required. Spec for question answering helpfulness score metric.
Required. Question answering helpfulness instance.
QuestionAnsweringHelpfulnessInstance
Spec for question answering helpfulness instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Optional. Text provided as context to answer the question.
instruction
string
Required. The question asked and other instruction in the inference prompt.
QuestionAnsweringHelpfulnessResult
Spec for question answering helpfulness result.
explanation
string
Output only. Explanation for question answering helpfulness score.
score
float
Output only. Question Answering Helpfulness score.
confidence
float
Output only. Confidence for question answering helpfulness score.
QuestionAnsweringHelpfulnessSpec
Spec for question answering helpfulness metric.
use_reference
bool
Optional. Whether to use instance.reference to compute question answering helpfulness.
version
int32
Optional. Which version to use for evaluation.
QuestionAnsweringQualityInput
Input for question answering quality metric.
Required. Spec for question answering quality score metric.
Required. Question answering quality instance.
QuestionAnsweringQualityInstance
Spec for question answering quality instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Required. Text to answer the question.
instruction
string
Required. Question Answering prompt for LLM.
QuestionAnsweringQualityResult
Spec for question answering quality result.
explanation
string
Output only. Explanation for question answering quality score.
score
float
Output only. Question Answering Quality score.
confidence
float
Output only. Confidence for question answering quality score.
QuestionAnsweringQualitySpec
Spec for question answering quality score metric.
use_reference
bool
Optional. Whether to use instance.reference to compute question answering quality.
version
int32
Optional. Which version to use for evaluation.
QuestionAnsweringRelevanceInput
Input for question answering relevance metric.
Required. Spec for question answering relevance score metric.
Required. Question answering relevance instance.
QuestionAnsweringRelevanceInstance
Spec for question answering relevance instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Optional. Text provided as context to answer the question.
instruction
string
Required. The question asked and other instruction in the inference prompt.
QuestionAnsweringRelevanceResult
Spec for question answering relevance result.
explanation
string
Output only. Explanation for question answering relevance score.
score
float
Output only. Question Answering Relevance score.
confidence
float
Output only. Confidence for question answering relevance score.
QuestionAnsweringRelevanceSpec
Spec for question answering relevance metric.
use_reference
bool
Optional. Whether to use instance.reference to compute question answering relevance.
version
int32
Optional. Which version to use for evaluation.
RawPredictRequest
Request message for PredictionService.RawPredict
.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
The prediction input. Supports HTTP headers and arbitrary data payload.
A DeployedModel
may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict
method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.
You can specify the schema for each instance in the predict_schemata.instance_schema_uri
field when you create a Model
. This schema applies when you deploy the Model
as a DeployedModel
to an Endpoint
and use the RawPredict
method.
RayMetricSpec
Configuration for the Ray metrics.
disabled
bool
Optional. Flag to disable the Ray metrics collection.
RaySpec
Configuration information for the Ray cluster. For the experimental launch, Ray cluster creation and persistent cluster creation have a 1:1 mapping: all the nodes within the persistent cluster are provisioned as Ray nodes.
image_uri
string
Optional. Default image for the user to choose a preferred ML framework (for example, TensorFlow or PyTorch) from Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image; otherwise, use the resource_pool_images field.
resource_pool_images
map<string, string>
Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for cases where the user needs different images for different head/worker pools. This map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
head_node_resource_pool_id
string
Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
Optional. Ray metrics configurations.
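The RaySpec invariants above (image_uri or resource_pool_images must be set, and a per-pool image map must cover every pool) can be checked client-side before submitting a request. The following is a minimal sketch over a plain-dict spec; the validation function itself is an illustration, not part of the API.

```python
def validate_ray_spec(spec: dict, pool_ids: list[str]) -> None:
    """Illustrative client-side check of the RaySpec rules described above."""
    image_uri = spec.get("image_uri")
    pool_images = spec.get("resource_pool_images", {})
    # Either image_uri or resource_pool_images is required.
    if not image_uri and not pool_images:
        raise ValueError("Either image_uri or resource_pool_images must be set.")
    # When per-pool images are used, the map must cover all resource pool ids.
    if pool_images:
        missing = [p for p in pool_ids if p not in pool_images]
        if missing:
            raise ValueError(f"resource_pool_images missing entries for: {missing}")
```

A spec that uses one image for every pool only needs image_uri; a spec with resource_pool_images must list an image for the head pool and each worker pool.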
ReadFeatureValuesRequest
Request message for FeaturestoreOnlineServingService.ReadFeatureValues
.
entity_type
string
Required. The resource name of the EntityType for the entity being read. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}
. For example, for a machine learning model predicting user clicks on a website, an EntityType ID could be user
.
entity_id
string
Required. ID for a specific entity. For example, for a machine learning model predicting user clicks on a website, an entity ID could be user_123
.
Required. Selector choosing Features of the target EntityType.
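As a rough sketch, a ReadFeatureValues request body can be assembled as a plain dict using the field names above. The feature_selector/id_matcher nesting is an assumption about the FeatureSelector message, which this section does not reproduce.

```python
def read_feature_values_request(project: str, location: str, featurestore: str,
                                entity_type: str, entity_id: str,
                                feature_ids: list[str]) -> dict:
    """Assemble a ReadFeatureValues request body as a plain dict (illustrative)."""
    entity_type_name = (
        f"projects/{project}/locations/{location}/featurestores/"
        f"{featurestore}/entityTypes/{entity_type}"
    )
    return {
        "entity_type": entity_type_name,   # full EntityType resource name
        "entity_id": entity_id,            # e.g. "user_123"
        # Assumed FeatureSelector shape: an IdMatcher listing Feature IDs.
        "feature_selector": {"id_matcher": {"ids": feature_ids}},
    }
```

For the user-clicks example in the field docs, entity_type would be "user" and entity_id "user_123".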
ReadFeatureValuesResponse
Response message for FeaturestoreOnlineServingService.ReadFeatureValues
.
Response header.
Entity view with Feature values. This may be the entity in the Featurestore if values for all Features were requested, or a projection of the entity in the Featurestore if values for only some Features were requested.
EntityView
Entity view with Feature values.
entity_id
string
ID of the requested entity.
Each piece of data holds the k requested values for one requested Feature. If no values for the requested Feature exist, the corresponding cell will be empty. This has the same size and is in the same order as the features from the header ReadFeatureValuesResponse.header
.
Data
Container to hold value(s), successive in time, for one Feature from the request.
Union field data
.
data
can be only one of the following:
Feature value if a single value is requested.
Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty.
FeatureDescriptor
Metadata for requested Features.
id
string
Feature ID.
Header
Response header with metadata for the requested ReadFeatureValuesRequest.entity_type
and Features.
entity_type
string
The resource name of the EntityType from the ReadFeatureValuesRequest
. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}
.
List of Feature metadata corresponding to each piece of ReadFeatureValuesResponse.EntityView.data
.
ReadTensorboardBlobDataRequest
Request message for TensorboardService.ReadTensorboardBlobData
.
time_series
string
Required. The resource name of the TensorboardTimeSeries to list Blobs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
blob_ids[]
string
IDs of the blobs to read.
ReadTensorboardBlobDataResponse
Response message for TensorboardService.ReadTensorboardBlobData
.
Blob messages containing blob bytes.
ReadTensorboardSizeRequest
Request message for TensorboardService.ReadTensorboardSize
.
tensorboard
string
Required. The name of the Tensorboard resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
ReadTensorboardSizeResponse
Response message for TensorboardService.ReadTensorboardSize
.
storage_size_byte
int64
Payload storage size for the TensorBoard
ReadTensorboardTimeSeriesDataRequest
Request message for TensorboardService.ReadTensorboardTimeSeriesData
.
tensorboard_time_series
string
Required. The resource name of the TensorboardTimeSeries to read data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
max_data_points
int32
The maximum number of TensorboardTimeSeries' data to return.
This value should be a positive integer. This value can be set to -1 to return all data.
filter
string
Reads the TensorboardTimeSeries' data that match the filter expression.
ReadTensorboardTimeSeriesDataResponse
Response message for TensorboardService.ReadTensorboardTimeSeriesData
.
The returned time series data.
ReadTensorboardUsageRequest
Request message for TensorboardService.ReadTensorboardUsage
.
tensorboard
string
Required. The name of the Tensorboard resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
ReadTensorboardUsageResponse
Response message for TensorboardService.ReadTensorboardUsage
.
Maps year-month (YYYYMM) string to per month usage data.
PerMonthUsageData
Per month usage data
Usage data for each user in the given month.
PerUserUsageData
Per user usage data.
username
string
User's username
view_count
int64
Number of times the user has read data within the Tensorboard.
RebaseTunedModelOperationMetadata
Runtime operation information for GenAiTuningService.RebaseTunedModel
.
The common part of the operation metadata.
RebaseTunedModelRequest
Request message for GenAiTuningService.RebaseTunedModel
.
parent
string
Required. The resource name of the Location into which to rebase the Model. Format: projects/{project}/locations/{location}
Required. TunedModel reference to retrieve the legacy model information.
Optional. The TuningJob to be updated. Users can use this TuningJob field to overwrite tuning configs.
Optional. The Google Cloud Storage location to write the artifacts.
deploy_to_same_endpoint
bool
Optional. By default, bison-to-gemini migration always creates a new model and endpoint, but for gemini-1.0 to gemini-1.5 migration, the model is deployed to the same endpoint by default.
RebootPersistentResourceOperationMetadata
Details of operations that reboot a PersistentResource.
Operation metadata for PersistentResource.
progress_message
string
Progress message for the reboot LRO.
RebootPersistentResourceRequest
Request message for PersistentResourceService.RebootPersistentResource
.
name
string
Required. The name of the PersistentResource resource. Format: projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}
RemoveContextChildrenRequest
Request message for MetadataService.RemoveContextChildren
.
context
string
Required. The resource name of the parent Context.
Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
child_contexts[]
string
The resource names of the child Contexts.
RemoveContextChildrenResponse
This type has no fields.
Response message for MetadataService.RemoveContextChildren
.
RemoveDatapointsRequest
Request message for IndexService.RemoveDatapoints
index
string
Required. The name of the Index resource to be updated. Format: projects/{project}/locations/{location}/indexes/{index}
datapoint_ids[]
string
A list of datapoint ids to be deleted.
RemoveDatapointsResponse
This type has no fields.
Response message for IndexService.RemoveDatapoints
ReservationAffinity
A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity.
Required. Specifies the reservation affinity type.
key
string
Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use compute.googleapis.com/reservation-name
as the key and specify the name of your reservation as its value.
values[]
string
Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
Type
Identifies a type of reservation affinity.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Default value. This should not be used. |
NO_RESERVATION |
Do not consume from any reserved capacity, only use on-demand. |
ANY_RESERVATION |
Consume any reservation available, falling back to on-demand. |
SPECIFIC_RESERVATION |
Consume from a specific reservation. When chosen, the reservation must be identified via the key and values fields. |
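Per the key and values field docs above, targeting a SPECIFIC_RESERVATION means pairing the documented label key with the reservation's full resource name. A minimal sketch, using the proto field names as plain-dict keys:

```python
def specific_reservation_affinity(reservation_resource_name: str) -> dict:
    """Build a ReservationAffinity payload targeting one reservation by name.

    Illustrative only; values[] must hold the full resource name of the
    reservation, per the field docs above."""
    return {
        "reservation_affinity_type": "SPECIFIC_RESERVATION",
        "key": "compute.googleapis.com/reservation-name",
        "values": [reservation_resource_name],
    }
```

NO_RESERVATION and ANY_RESERVATION need only the type field; key and values are specific to SPECIFIC_RESERVATION.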
ResourcePool
Represents the spec of a group of resources of the same type, for example machine type, disk, and accelerators, in a PersistentResource.
id
string
Immutable. The unique ID in a PersistentResource for referring to this resource pool. User can specify it if necessary. Otherwise, it's generated automatically.
Required. Immutable. The specification of a single machine.
Optional. Disk spec for the machine in this node pool.
used_replica_count
int64
Output only. The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.
Optional. Spec to configure GKE or Ray-on-Vertex autoscaling.
replica_count
int64
Optional. The total number of machines to use for this resource pool.
AutoscalingSpec
The min/max number of replicas allowed if enabling autoscaling
min_replica_count
int64
Optional. The minimum number of replicas in the node pool. Must be ≤ replica_count and < max_replica_count; otherwise an error is thrown. For autoscaling-enabled Ray-on-Vertex, min_replica_count of a resource_pool may be 0 to match the OSS Ray behavior (https://docs.ray.io/en/latest/cluster/vms/user-guides/configuring-autoscaling.html#cluster-config-parameters). For a Persistent Resource, min_replica_count must be > 0; this is validated when the request is handled.
max_replica_count
int64
Optional. The maximum number of replicas in the node pool. Must be ≥ replica_count and > min_replica_count; otherwise an error is thrown.
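The min/max constraints above can be checked before sending a request. A sketch of those rules in plain Python (an illustration, not the service's actual validator):

```python
def validate_autoscaling_spec(min_replica_count: int, max_replica_count: int,
                              replica_count: int, ray_on_vertex: bool = False) -> None:
    """Illustrative client-side check of the AutoscalingSpec constraints above."""
    # Ray-on-Vertex with autoscaling allows min_replica_count == 0;
    # a plain Persistent Resource requires min_replica_count > 0.
    floor = 0 if ray_on_vertex else 1
    if min_replica_count < floor:
        raise ValueError("min_replica_count below the allowed minimum")
    if min_replica_count > replica_count or min_replica_count >= max_replica_count:
        raise ValueError("min_replica_count must be <= replica_count and < max_replica_count")
    if max_replica_count < replica_count:
        raise ValueError("max_replica_count must be >= replica_count")
```

For example, (min=1, max=5, replica_count=3) is valid for both cluster types, while min=0 is only valid for Ray-on-Vertex.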
ResourceRuntime
Persistent Cluster runtime information as output
access_uris
map<string, string>
Output only. URIs for user to connect to the Cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
ResourceRuntimeSpec
Configuration for the runtime on a PersistentResource instance, including but not limited to:
- Service accounts used to run the workloads.
- Whether to make it a dedicated Ray Cluster.
Optional. Configure the use of workload identity on the PersistentResource
Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
ResourcesConsumed
Statistics information about resource consumption.
replica_hours
double
Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time.
RestoreDatasetVersionOperationMetadata
Runtime operation information for DatasetService.RestoreDatasetVersion
.
The common part of the operation metadata.
RestoreDatasetVersionRequest
Request message for DatasetService.RestoreDatasetVersion
.
name
string
Required. The name of the DatasetVersion resource. Format: projects/{project}/locations/{location}/datasets/{dataset}/datasetVersions/{dataset_version}
ResumeModelDeploymentMonitoringJobRequest
Request message for JobService.ResumeModelDeploymentMonitoringJob
.
name
string
Required. The resource name of the ModelDeploymentMonitoringJob to resume. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
ResumeScheduleRequest
Request message for ScheduleService.ResumeSchedule
.
name
string
Required. The name of the Schedule resource to be resumed. Format: projects/{project}/locations/{location}/schedules/{schedule}
catch_up
bool
Optional. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. This also updates the Schedule.catch_up
field. Defaults to false.
Retrieval
Defines a retrieval tool that the model can call to access external knowledge.
disable_attribution
(deprecated)
bool
Optional. Deprecated. This option is no longer supported.
source
. The source of the retrieval. source
can be only one of the following:
Set to use data source powered by Vertex AI Search.
RetrievalMetadata
Metadata related to retrieval in the grounding flow.
google_search_dynamic_retrieval_score
float
Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range [0, 1]
, where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval is enabled. It will be compared to the threshold to determine whether to trigger Google Search.
RougeInput
Input for rouge metric.
Required. Spec for rouge score metric.
Required. Repeated rouge instances.
RougeInstance
Spec for rouge instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
RougeMetricValue
Rouge metric value for an instance.
score
float
Output only. Rouge score.
RougeResults
Results for rouge metric.
Output only. Rouge metric values.
RougeSpec
Spec for rouge score metric - calculates the recall of n-grams in prediction as compared to reference - returns a score ranging between 0 and 1.
rouge_type
string
Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.
use_stemmer
bool
Optional. Whether to use stemmer to compute rouge score.
split_summaries
bool
Optional. Whether to split summaries while using rougeLsum.
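To make the RougeSpec description concrete, here is a simplified sketch of n-gram recall: the fraction of reference n-grams that also appear in the prediction, with clipping. This is an illustration of the idea only, not the service's implementation (no stemming, plain whitespace tokenization).

```python
from collections import Counter

def rouge_n_recall(prediction: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N recall in [0, 1]; each reference n-gram is
    matched at most as many times as it occurs in the prediction."""
    def ngrams(text: str):
        toks = text.lower().split()
        return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    ref_counts = Counter(ngrams(reference))
    if not ref_counts:
        return 0.0
    pred_counts = Counter(ngrams(prediction))
    overlap = sum(min(c, pred_counts[g]) for g, c in ref_counts.items())
    return overlap / sum(ref_counts.values())
```

A score of 1 means every reference n-gram is covered by the prediction; 0 means no overlap.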
SafetyInput
Input for safety metric.
Required. Spec for safety metric.
Required. Safety instance.
SafetyInstance
Spec for safety instance.
prediction
string
Required. Output of the evaluated model.
SafetyRating
Safety rating corresponding to the generated content.
Output only. Harm category.
Output only. Harm probability levels in the content.
probability_score
float
Output only. Harm probability score.
Output only. Harm severity levels in the content.
severity_score
float
Output only. Harm severity score.
blocked
bool
Output only. Indicates whether the content was filtered out because of this rating.
HarmProbability
Harm probability levels in the content.
Enums | |
---|---|
HARM_PROBABILITY_UNSPECIFIED |
Harm probability unspecified. |
NEGLIGIBLE |
Negligible level of harm. |
LOW |
Low level of harm. |
MEDIUM |
Medium level of harm. |
HIGH |
High level of harm. |
HarmSeverity
Harm severity levels.
Enums | |
---|---|
HARM_SEVERITY_UNSPECIFIED |
Harm severity unspecified. |
HARM_SEVERITY_NEGLIGIBLE |
Negligible level of harm severity. |
HARM_SEVERITY_LOW |
Low level of harm severity. |
HARM_SEVERITY_MEDIUM |
Medium level of harm severity. |
HARM_SEVERITY_HIGH |
High level of harm severity. |
SafetyResult
Spec for safety result.
explanation
string
Output only. Explanation for safety score.
score
float
Output only. Safety score.
confidence
float
Output only. Confidence for safety score.
SafetySetting
Safety settings.
Required. Harm category.
Required. The harm block threshold.
Optional. Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score.
HarmBlockMethod
Probability vs severity.
Enums | |
---|---|
HARM_BLOCK_METHOD_UNSPECIFIED |
The harm block method is unspecified. |
SEVERITY |
The harm block method uses both probability and severity scores. |
PROBABILITY |
The harm block method uses the probability score. |
HarmBlockThreshold
Probability based thresholds levels for blocking.
Enums | |
---|---|
HARM_BLOCK_THRESHOLD_UNSPECIFIED |
Unspecified harm block threshold. |
BLOCK_LOW_AND_ABOVE |
Block low threshold and above (i.e. block more). |
BLOCK_MEDIUM_AND_ABOVE |
Block medium threshold and above. |
BLOCK_ONLY_HIGH |
Block only high threshold (i.e. block less). |
BLOCK_NONE |
Block none. |
OFF |
Turn off the safety filter. |
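The threshold semantics above ("block X and above") can be modeled as an ordered comparison over the HarmProbability levels. The sketch below mirrors the two enums; the ordering and mapping are an illustration of the documented semantics, and the actual filtering happens server-side.

```python
# Probability levels ordered least to most severe, mirroring HarmProbability.
LEVELS = ["NEGLIGIBLE", "LOW", "MEDIUM", "HIGH"]

# Lowest level that gets blocked under each threshold.
MIN_BLOCKED = {
    "BLOCK_LOW_AND_ABOVE": "LOW",
    "BLOCK_MEDIUM_AND_ABOVE": "MEDIUM",
    "BLOCK_ONLY_HIGH": "HIGH",
}

def is_blocked(probability: str, threshold: str) -> bool:
    """Return True if content at the given probability level would be blocked
    under the given threshold (illustrative)."""
    if threshold not in MIN_BLOCKED:  # BLOCK_NONE, OFF, or unspecified
        return False
    return LEVELS.index(probability) >= LEVELS.index(MIN_BLOCKED[threshold])
```

With HarmBlockMethod set to SEVERITY, the same comparison would apply to HarmSeverity levels instead of (or in addition to) probability levels.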
SafetySpec
Spec for safety metric.
version
int32
Optional. Which version to use for evaluation.
SampledShapleyAttribution
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
path_count
int32
Required. The number of feature permutations to consider when approximating the Shapley values.
Valid range of its value is [1, 50], inclusively.
SamplingStrategy
Sampling strategy for logging; can be used for both training and prediction datasets.
Random sample config. Will support more sampling strategies later.
RandomSampleConfig
Requests are randomly selected.
sample_rate
double
Sample rate (0, 1]
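Random sampling at a given sample_rate amounts to an independent coin flip per request. A minimal sketch of that behavior, including the (0, 1] range check from the field doc (the function and its name are illustrative, not part of the API):

```python
import random

def should_log(sample_rate: float, rng: random.Random) -> bool:
    """Select one request for logging with probability sample_rate,
    which must lie in (0, 1] per the field doc above."""
    if not 0 < sample_rate <= 1:
        raise ValueError("sample_rate must be in (0, 1]")
    return rng.random() < sample_rate
```

With sample_rate = 1.0 every request is logged; smaller rates trade completeness for logging volume.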
SavedQuery
A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.
name
string
Output only. Resource name of the SavedQuery.
display_name
string
Required. The user-defined name of the SavedQuery. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Some additional information about the SavedQuery.
Output only. Timestamp when this SavedQuery was created.
Output only. Timestamp when SavedQuery was last updated.
annotation_filter
string
Output only. Filters on the Annotations in the dataset.
problem_type
string
Required. Problem type of the SavedQuery. Allowed values:
- IMAGE_CLASSIFICATION_SINGLE_LABEL
- IMAGE_CLASSIFICATION_MULTI_LABEL
- IMAGE_BOUNDING_POLY
- IMAGE_BOUNDING_BOX
- TEXT_CLASSIFICATION_SINGLE_LABEL
- TEXT_CLASSIFICATION_MULTI_LABEL
- TEXT_EXTRACTION
- TEXT_SENTIMENT
- VIDEO_CLASSIFICATION
- VIDEO_OBJECT_TRACKING
annotation_spec_count
int32
Output only. Number of AnnotationSpecs in the context of the SavedQuery.
etag
string
Used to perform a consistent read-modify-write update. If not set, a blind "overwrite" update happens.
support_automl_training
bool
Output only. If the Annotations belonging to the SavedQuery can be used for AutoML training.
Scalar
One point viewable on a scalar metric plot.
value
double
Value of the point at this step / timestamp.
Schedule
An instance of a Schedule periodically schedules runs to make API calls based on a user-specified time specification and API request type.
name
string
Immutable. The resource name of the Schedule.
display_name
string
Required. User provided name of the Schedule. The name can be up to 128 characters long and can consist of any UTF-8 characters.
Optional. Timestamp after which the first run can be scheduled. Default to Schedule create time if not specified.
Optional. Timestamp after which no new runs can be scheduled. If specified, the schedule will be completed when either end_time is reached or when scheduled_run_count >= max_run_count. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified.
max_run_count
int64
Optional. Maximum run count of the schedule. If specified, the schedule will be completed when either started_run_count >= max_run_count or when end_time is reached. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified.
started_run_count
int64
Output only. The number of runs started by this schedule.
Output only. The state of this Schedule.
Output only. Timestamp when this Schedule was created.
Output only. Timestamp when this Schedule was updated.
Output only. Timestamp when this Schedule should schedule the next run. Having a next_run_time in the past means the runs are being started behind schedule.
Output only. Timestamp when this Schedule was last paused. Unset if never paused.
Output only. Timestamp when this Schedule was last resumed. Unset if never resumed from pause.
max_concurrent_run_count
int64
Required. Maximum number of runs that can be started concurrently for this Schedule. This is the limit for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable).
allow_queueing
bool
Optional. Whether new scheduled runs can be queued when max_concurrent_runs limit is reached. If set to true, new runs will be queued instead of skipped. Default to false.
catch_up
bool
Output only. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. Defaults to false.
Output only. Response of the last scheduled run. This is the response for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable). Unset if no run has been scheduled yet.
time_specification
. Required. The time specification to launch scheduled runs. time_specification
can be only one of the following:
cron
string
Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".
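The timezone prefix described above can be stripped from a cron line with a small helper. A sketch (the helper itself is illustrative, not part of the API):

```python
def split_cron_timezone(cron: str):
    """Split an optional "CRON_TZ=" or "TZ=" prefix off a cron line,
    per the format described above.

    Returns (timezone, expression); timezone is None when no prefix is present."""
    for prefix in ("CRON_TZ=", "TZ="):
        if cron.startswith(prefix):
            tz, _, expression = cron[len(prefix):].partition(" ")
            return tz, expression
    return None, cron
```

For example, "CRON_TZ=America/New_York 1 * * * *" splits into the IANA zone name and the five-field cron expression.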
request
. Required. The API request template to launch the scheduled runs. User-specified ID is not supported in the request template. request
can be only one of the following:
Request for PipelineService.CreatePipelineJob
. CreatePipelineJobRequest.parent field is required (format: projects/{project}/locations/{location}).
Request for NotebookService.CreateNotebookExecutionJob
.
RunResponse
Status of a scheduled run.
The scheduled run time based on the user-specified schedule.
run_response
string
The response of the scheduled run.
State
Possible state of the schedule.
Enums | |
---|---|
STATE_UNSPECIFIED |
Unspecified. |
ACTIVE |
The Schedule is active. Runs are being scheduled on the user-specified timespec. |
PAUSED |
The schedule is paused. No new runs will be created until the schedule is resumed. Already started runs will be allowed to complete. |
COMPLETED |
The Schedule is completed. No new runs will be scheduled. Already started runs will be allowed to complete. Schedules in completed state cannot be paused or resumed. |
Scheduling
All parameters related to queuing and scheduling of custom jobs.
The maximum job running time. The default is 7 days.
restart_job_on_worker_restart
bool
Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
Optional. This determines which type of scheduling strategy to use.
disable_retries
bool
Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart
to false.
Optional. This is the maximum duration that a job will wait for the requested resources to be provisioned if the scheduling strategy is set to Strategy.DWS_FLEX_START. If set to 0, the job will wait indefinitely. The default is 24 hours.
Strategy
Optional. This determines which type of scheduling strategy to use. Currently there are two options: STANDARD, which uses regular on-demand resources to schedule the job, and SPOT, which leverages spot resources along with regular resources to schedule the job.
Enums | |
---|---|
STRATEGY_UNSPECIFIED |
Strategy will default to STANDARD. |
ON_DEMAND |
Deprecated. Regular on-demand provisioning strategy. |
LOW_COST |
Deprecated. Low cost by making potential use of spot resources. |
STANDARD |
Standard provisioning strategy uses regular on-demand resources. |
SPOT |
Spot provisioning strategy uses spot resources. |
FLEX_START |
Flex Start strategy uses DWS to queue for resources. |
Schema
Schema is used to define the format of input/output data. Represents a select subset of an OpenAPI 3.0 schema object. More fields may be added in the future as needed.
Optional. The type of the data.
format
string
Optional. The format of the data. Supported formats: for NUMBER type: "float", "double"; for INTEGER type: "int32", "int64"; for STRING type: "email", "byte", etc.
title
string
Optional. The title of the Schema.
description
string
Optional. The description of the data.
nullable
bool
Optional. Indicates if the value may be null.
Optional. Default value of the data.
Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
min_items
int64
Optional. Minimum number of the elements for Type.ARRAY.
max_items
int64
Optional. Maximum number of the elements for Type.ARRAY.
enum[]
string
Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as: {type:INTEGER, format:enum, enum:["101", "201", "301"]}
Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
property_ordering[]
string
Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
required[]
string
Optional. Required properties of Type.OBJECT.
min_properties
int64
Optional. Minimum number of the properties for Type.OBJECT.
max_properties
int64
Optional. Maximum number of the properties for Type.OBJECT.
minimum
double
Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
maximum
double
Optional. Maximum value of the Type.INTEGER and Type.NUMBER
min_length
int64
Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
max_length
int64
Optional. Maximum length of the Type.STRING
pattern
string
Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
Optional. Example of the object. Will only be populated when the object is the root.
Optional. The value should be validated against any (one or more) of the subschemas in the list.
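A Schema instance can be written out as a plain dict whose keys mirror the fields above. The sketch below builds the enum example from the enum[] field doc and nests it under an OBJECT schema; the field values are illustrative.

```python
# A STRING enum schema, following the first example in the enum[] field doc.
direction_schema = {
    "type": "STRING",
    "format": "enum",
    "enum": ["EAST", "NORTH", "SOUTH", "WEST"],
    "nullable": False,
    "description": "Compass direction.",
}

# An OBJECT schema using the OBJECT-specific fields (properties, required,
# property_ordering) described above.
request_schema = {
    "type": "OBJECT",
    "properties": {"direction": direction_schema},
    "required": ["direction"],
    "property_ordering": ["direction"],
}
```

Because Schema is a select subset of OpenAPI 3.0, dicts shaped like this stay close to ordinary OpenAPI schema objects.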
SearchDataItemsRequest
Request message for DatasetService.SearchDataItems
.
dataset
string
Required. The resource name of the Dataset from which to search DataItems. Format: projects/{project}/locations/{location}/datasets/{dataset}
saved_query
(deprecated)
string
The resource name of a SavedQuery(annotation set in UI). Format: projects/{project}/locations/{location}/datasets/{dataset}/savedQueries/{saved_query}
All of the search will be done in the context of this SavedQuery.
data_labeling_job
string
The resource name of a DataLabelingJob. Format: projects/{project}/locations/{location}/dataLabelingJobs/{data_labeling_job}
If this field is set, all of the search will be done in the context of this DataLabelingJob.
data_item_filter
string
An expression for filtering the DataItems that will be returned.
- data_item_id - for = or !=.
- labeled - for = or !=.
- has_annotation(ANNOTATION_SPEC_ID) - true only for DataItems that have at least one annotation with annotation_spec_id = ANNOTATION_SPEC_ID in the context of the SavedQuery or DataLabelingJob.
For example:
- data_item=1
- has_annotation(5)
annotations_filter
(deprecated)
string
An expression for filtering the Annotations that will be returned per DataItem.
- annotation_spec_id - for = or !=.
annotation_filters[]
string
An expression that specifies what Annotations will be returned per DataItem. Annotations satisfying either of the conditions will be returned.
- annotation_spec_id - for = or !=. Must specify saved_query_id= - the saved query id that annotations should belong to.
Mask specifying which fields of DataItemView
to read.
annotations_limit
int32
If set, only up to this many of Annotations will be returned per DataItemView. The maximum value is 1000. If not set, the maximum value will be used.
page_size
int32
Requested page size. Server may return fewer results than requested. Default and maximum page size is 100.
order_by
(deprecated)
string
A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
page_token
string
A token identifying a page of results for the server to return. Typically obtained via SearchDataItemsResponse.next_page_token
of the previous DatasetService.SearchDataItems
call.
Union field order
.
order
can be only one of the following:
order_by_data_item
string
A comma-separated list of data item fields to order by, sorted in ascending order. Use "desc" after a field name for descending.
Expression that allows ranking results based on annotation's property.
OrderByAnnotation
Expression that allows ranking results based on annotation's property.
saved_query
string
Required. Saved query of the Annotation. Only Annotations belonging to this saved query will be considered for ordering.
order_by
string
A comma-separated list of annotation fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Must also specify saved_query.
SearchDataItemsResponse
Response message for DatasetService.SearchDataItems
.
The DataItemViews read.
next_page_token
string
A token to retrieve next page of results. Pass to SearchDataItemsRequest.page_token
to obtain that page.
SearchEntryPoint
Google search entry point.
rendered_content
string
Optional. Web content snippet that can be embedded in a web page or an app webview.
sdk_blob
bytes
Optional. Base64-encoded JSON representing an array of <search term, search url> tuples.
SearchFeaturesRequest
Request message for FeaturestoreService.SearchFeatures
.
location
string
Required. The resource name of the Location to search Features. Format: projects/{project}/locations/{location}
query
string
Query string that is a conjunction of field-restricted queries and/or field-restricted filters. Field-restricted queries and filters can be combined using AND
to form a conjunction.
A field query is in the form FIELD:QUERY. This implicitly checks if QUERY exists as a substring within Feature's FIELD. The QUERY and the FIELD are converted to a sequence of words (i.e. tokens) for comparison. This is done by:
- Removing leading/trailing whitespace and tokenizing the search value. Characters that are not one of alphanumeric [a-zA-Z0-9], underscore _, or asterisk * are treated as delimiters for tokens. * is treated as a wildcard that matches characters within a token.
- Ignoring case.
- Prepending an asterisk to the first and appending an asterisk to the last token in QUERY.
A QUERY must be either a singular token or a phrase. A phrase is one or multiple words enclosed in double quotation marks ("). With phrases, the order of the words is important. Words in the phrase must be matching in order and consecutively.
Supported FIELDs for field-restricted queries:
feature_id
description
entity_type_id
Examples:
- feature_id: foo --> Matches a Feature with ID containing the substring foo (e.g. foo, foofeature, barfoo).
- feature_id: foo*feature --> Matches a Feature with ID containing the substring foo*feature (e.g. foobarfeature).
- feature_id: foo AND description: bar --> Matches a Feature with ID containing the substring foo and description containing the substring bar.
Besides field queries, the following exact-match filters are supported. The exact-match filters do not support wildcards. Unlike field-restricted queries, exact-match filters are case-sensitive.
- feature_id: Supports = comparisons.
- description: Supports = comparisons. Multi-token filters should be enclosed in quotes.
- entity_type_id: Supports = comparisons.
- value_type: Supports = and != comparisons.
- labels: Supports key-value equality as well as key presence.
- featurestore_id: Supports = comparisons.
Examples:
- description = "foo bar" --> Any Feature with description exactly equal to foo bar.
- value_type = DOUBLE --> Features whose type is DOUBLE.
- labels.active = yes AND labels.env = prod --> Features having both (active: yes) and (env: prod) labels.
- labels.env: * --> Any Feature which has a label with env as the key.
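The tokenization and wildcard rules above can be approximated in a few lines. This is a rough sketch of the documented behavior, not the service's actual matcher; the function names are illustrative.

```python
import re

def tokenize(value: str) -> list[str]:
    """Tokenize a search value per the rules above: characters other than
    [a-zA-Z0-9], underscore, or asterisk delimit tokens; case is ignored."""
    return [t for t in re.split(r"[^a-zA-Z0-9_*]+", value.strip().lower()) if t]

def field_query_matches(query: str, field_value: str) -> bool:
    """Approximate FIELD:QUERY matching: implicit leading/trailing wildcards
    around QUERY, and * as an in-token wildcard (illustrative only)."""
    q_tokens = tokenize(query)
    # Tokens contain only [a-z0-9_*] after tokenization, so replacing the
    # in-token wildcard yields a safe regex fragment.
    pattern = ".*" + " ".join(t.replace("*", "[a-z0-9_]*") for t in q_tokens) + ".*"
    return re.fullmatch(pattern, " ".join(tokenize(field_value))) is not None
```

This reproduces the examples above: "foo" matches "barfoo" via the implicit wildcards, and "foo*feature" matches "foobarfeature" via the in-token wildcard.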
page_size
int32
The maximum number of Features to return. The service may return fewer than this value. If unspecified, at most 100 Features will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.
page_token
string
A page token, received from a previous FeaturestoreService.SearchFeatures
call. Provide this to retrieve the subsequent page.
When paginating, all other parameters provided to FeaturestoreService.SearchFeatures
, except page_size
, must match the call that provided the page token.
SearchFeaturesResponse
Response message for FeaturestoreService.SearchFeatures
.
The Features matching the request.
Fields returned:
name
description
labels
create_time
update_time
next_page_token
string
A token, which can be sent as SearchFeaturesRequest.page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
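Paging through results follows the usual token pattern: pass next_page_token back as page_token until it is omitted. A minimal sketch, assuming a hypothetical client stub whose search_features method returns a (features, next_page_token) pair:

```python
def search_all_features(client, location, query, page_size=100):
    """Collect all Features by following next_page_token.

    `client` is a hypothetical stub, not the real SDK client: its
    search_features(...) returns (list_of_features, next_page_token).
    """
    features, token = [], None
    while True:
        page, token = client.search_features(
            location=location, query=query,
            page_size=page_size, page_token=token)
        features.extend(page)
        if not token:  # an omitted token means there are no subsequent pages
            return features
```

All other parameters except page_size must stay identical across calls, as the request documentation requires.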
SearchMigratableResourcesRequest
Request message for MigrationService.SearchMigratableResources
.
parent
string
Required. The location that the migratable resources should be searched from. It's the Vertex AI location that the resources can be migrated to, not the resources' original location. Format: projects/{project}/locations/{location}
page_size
int32
The standard page size. The default and maximum value is 100.
page_token
string
The standard page token.
filter
string
A filter for your search. You can use the following types of filters:
- Resource type filters. The following strings filter for a specific type of MigratableResource:
ml_engine_model_version:*
automl_model:*
automl_dataset:*
data_labeling_dataset:*
- "Migrated or not" filters. The following strings filter for resources that either have or have not already been migrated:
last_migrate_time:* filters for migrated resources.
NOT last_migrate_time:* filters for not yet migrated resources.
SearchMigratableResourcesResponse
Response message for MigrationService.SearchMigratableResources
.
All migratable resources that can be migrated to the location specified in the request.
next_page_token
string
The standard next-page token. The migratable_resources may not fill page_size in SearchMigratableResourcesRequest even when there are subsequent pages.
SearchModelDeploymentMonitoringStatsAnomaliesRequest
Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies
.
model_deployment_monitoring_job
string
Required. ModelDeploymentMonitoring Job resource name. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
deployed_model_id
string
Required. The DeployedModel ID of the [ModelDeploymentMonitoringObjectiveConfig.deployed_model_id].
feature_display_name
string
The feature display name. If specified, only return the stats belonging to this feature. Format: ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.feature_display_name
, example: "user_destination".
Required. Objectives of the stats to retrieve.
page_size
int32
The standard list page size.
page_token
string
A page token received from a previous JobService.SearchModelDeploymentMonitoringStatsAnomalies
call.
The earliest timestamp of stats being generated. If not set, indicates fetching stats until the earliest possible one.
The latest timestamp of stats being generated. If not set, indicates fetching stats until the latest possible one.
StatsAnomaliesObjective
Stats requested for specific objective.
top_feature_count
int32
If set, all attribution scores between SearchModelDeploymentMonitoringStatsAnomaliesRequest.start_time
and SearchModelDeploymentMonitoringStatsAnomaliesRequest.end_time
are fetched, and the page token doesn't take effect in this case. Only used to retrieve attribution scores for the top Features with the highest attribution scores in the latest monitoring run.
SearchModelDeploymentMonitoringStatsAnomaliesResponse
Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies
.
Stats retrieved for requested objectives. There are at most 1000 ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.prediction_stats
in the response.
next_page_token
string
The page token that can be used by the next JobService.SearchModelDeploymentMonitoringStatsAnomalies
call.
SearchNearestEntitiesRequest
The request message for FeatureOnlineStoreService.SearchNearestEntities
.
feature_view
string
Required. FeatureView resource format projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}
Required. The query.
return_full_entity
bool
Optional. If set to true, the full entities (including all vector values and metadata) of the nearest neighbors are returned; otherwise only entity id of the nearest neighbors will be returned. Note that returning full entities will significantly increase the latency and cost of the query.
SearchNearestEntitiesResponse
Response message for FeatureOnlineStoreService.SearchNearestEntities
The nearest neighbors of the query entity.
Segment
Segment of the content.
part_index
int32
Output only. The index of a Part object within its parent Content object.
start_index
int32
Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
end_index
int32
Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
text
string
Output only. The text corresponding to the segment from the response.
ServiceAccountSpec
Configuration for the use of custom service account to run the workloads.
enable_custom_service_account
bool
Required. If true, custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, uses the Vertex AI Custom Code Service Agent.
service_account
string
Optional. Required when all of the following conditions are met: * enable_custom_service_account
is true; * any runtime is specified via ResourceRuntimeSpec
at creation time, for example, Ray.
The users must have iam.serviceAccounts.actAs
permission on this service account, and the specified runtime containers will run as it.
Do not set this field if you want to submit jobs using a custom service account to this PersistentResource after creation; in that case, specify the service_account
inside the job only.
ShieldedVmConfig
A set of Shielded Instance options. See Images using supported Shielded VM features.
enable_secure_boot
bool
Defines whether the instance has Secure Boot enabled.
Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
SmoothGradConfig
Config for SmoothGrad approximation of gradients.
When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
noisy_sample_count
int32
The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
GradientNoiseSigma
. Represents the standard deviation of the gaussian kernel that will be used to add noise to the interpolated inputs prior to computing gradients. GradientNoiseSigma
can be only one of the following:
noise_sigma
float
This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization.
For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1.
If the distribution is different per feature, set feature_noise_sigma
instead for each feature.
This is similar to noise_sigma
, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma
will be used for all features.
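The SmoothGrad approximation itself is simple: draw noisy copies of the input with standard deviation noise_sigma and average the gradients over them. A minimal pure-Python sketch (grad_fn and the quadratic objective are stand-ins for a real model, not the service's implementation):

```python
import random

def smoothgrad(grad_fn, x, noise_sigma=0.1, noisy_sample_count=3, seed=0):
    """Average grad_fn over noisy copies of x (a list of floats)."""
    rng = random.Random(seed)
    total = [0.0] * len(x)
    for _ in range(noisy_sample_count):
        # Perturb each input coordinate with Gaussian noise of the given sigma.
        noisy = [xi + rng.gauss(0.0, noise_sigma) for xi in x]
        g = grad_fn(noisy)
        total = [t + gi for t, gi in zip(total, g)]
    return [t / noisy_sample_count for t in total]

# Example: for f(x) = sum(x_i**2), the true gradient is 2*x.
grad_fn = lambda xs: [2.0 * xi for xi in xs]
approx = smoothgrad(grad_fn, [1.0, -2.0], noise_sigma=0.1, noisy_sample_count=50)
```

Raising noisy_sample_count tightens the estimate at a proportional cost in gradient evaluations, which is the trade-off the field description notes.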
SpecialistPool
SpecialistPool represents customers' own workforce to work on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as customers' data labeling jobs associated with this pool. Customers create specialist pool as well as start data labeling jobs on Cloud, managers and workers handle the jobs using CrowdCompute console.
name
string
Required. The resource name of the SpecialistPool.
display_name
string
Required. The user-defined name of the SpecialistPool. The name can be up to 128 characters long and can consist of any UTF-8 characters. This field should be unique on project-level.
specialist_managers_count
int32
Output only. The number of managers in this SpecialistPool.
specialist_manager_emails[]
string
The email addresses of the managers in the SpecialistPool.
pending_data_labeling_jobs[]
string
Output only. The resource name of the pending data labeling jobs.
specialist_worker_emails[]
string
The email addresses of workers in the SpecialistPool.
StartNotebookRuntimeOperationMetadata
Metadata information for NotebookService.StartNotebookRuntime
.
The operation generic information.
progress_message
string
A human-readable message that shows the intermediate progress details of NotebookRuntime.
StartNotebookRuntimeRequest
Request message for NotebookService.StartNotebookRuntime
.
name
string
Required. The name of the NotebookRuntime resource to be started. Instead of checking whether the name is in a valid NotebookRuntime resource name format, a NotFound exception is thrown directly if there is no such NotebookRuntime.
StartNotebookRuntimeResponse
This type has no fields.
Response message for NotebookService.StartNotebookRuntime
.
StopTrialRequest
Request message for VizierService.StopTrial
.
name
string
Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}
StratifiedSplit
Assigns input data to the training, validation, and test sets so that the distribution of values found in the categorical column (as specified by the key
field) is mirrored within each split. The fraction values determine the relative sizes of the splits.
For example, if the specified column has three values, with 50% of the rows having value "A", 25% value "B", and 25% value "C", and the split fractions are specified as 80/10/10, then the training set will constitute 80% of the input data, with about 50% of the training set rows having the value "A" for the specified column, about 25% having the value "B", and about 25% having the value "C".
Only the top 500 occurring values are used; any values not in the top 500 are randomly assigned to a split. If fewer than three rows contain a specific value, those rows are randomly assigned.
Supported only for tabular Datasets.
training_fraction
double
The fraction of the input data that is to be used to train the Model.
validation_fraction
double
The fraction of the input data that is to be used to validate the Model.
test_fraction
double
The fraction of the input data that is to be used to evaluate the Model.
key
string
Required. The key is a name of one of the Dataset's data columns. The key provided must be for a categorical column.
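The stratified assignment described above can be sketched by grouping rows on the key column and splitting each group by the requested fractions; the in-memory rows and helper here are illustrative, not the service's implementation:

```python
import random

def stratified_split(rows, key, fractions=(0.8, 0.1, 0.1), seed=0):
    """Split rows into (train, validation, test) so that the distribution
    of values in the `key` column is mirrored within each split."""
    rng = random.Random(seed)
    by_value = {}
    for row in rows:
        by_value.setdefault(row[key], []).append(row)
    splits = ([], [], [])
    for group in by_value.values():
        rng.shuffle(group)  # randomize assignment within each value group
        n = len(group)
        a = int(n * fractions[0])
        b = a + int(n * fractions[1])
        splits[0].extend(group[:a])   # training
        splits[1].extend(group[a:b])  # validation
        splits[2].extend(group[b:])   # test
    return splits
```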
StreamDirectPredictRequest
Request message for PredictionService.StreamDirectPredict
.
The first message must contain endpoint
field and optionally [input][]. The subsequent messages must contain [input][].
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Optional. The prediction input.
Optional. The parameters that govern the prediction.
StreamDirectPredictResponse
Response message for PredictionService.StreamDirectPredict
.
The prediction output.
The parameters that govern the prediction.
StreamDirectRawPredictRequest
Request message for PredictionService.StreamDirectRawPredict
.
The first message must contain endpoint
and method_name
fields and optionally input
. The subsequent messages must contain input
. method_name
in the subsequent messages have no effect.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
method_name
string
Optional. Fully qualified name of the API method being invoked to perform predictions.
Format: /namespace.Service/Method/
Example: /tensorflow.serving.PredictionService/Predict
input
bytes
Optional. The prediction input.
StreamDirectRawPredictResponse
Response message for PredictionService.StreamDirectRawPredict
.
output
bytes
The prediction output.
StreamRawPredictRequest
Request message for PredictionService.StreamRawPredict
.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
The prediction input. Supports HTTP headers and arbitrary data payload.
StreamingPredictRequest
Request message for PredictionService.StreamingPredict
.
The first message must contain endpoint
field and optionally [input][]. The subsequent messages must contain [input][].
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
The prediction input.
The parameters that govern the prediction.
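The first-message rule above (endpoint only needed in the first request, inputs in every request) can be sketched as a generator over plain dicts standing in for the request proto:

```python
def streaming_predict_requests(endpoint, input_batches):
    """Yield request messages for a streaming predict call.

    Plain dicts stand in for the request proto: only the first message
    carries the endpoint; every message carries the inputs.
    """
    first = True
    for inputs in input_batches:
        msg = {"inputs": inputs}
        if first:
            msg["endpoint"] = endpoint
            first = False
        yield msg
```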
StreamingPredictResponse
Response message for PredictionService.StreamingPredict
.
The prediction output.
The parameters that govern the prediction.
StreamingRawPredictRequest
Request message for PredictionService.StreamingRawPredict
.
The first message must contain endpoint
and method_name
fields and optionally input
. The subsequent messages must contain input
. method_name
in the subsequent messages have no effect.
endpoint
string
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
method_name
string
Fully qualified name of the API method being invoked to perform predictions.
Format: /namespace.Service/Method/
Example: /tensorflow.serving.PredictionService/Predict
input
bytes
The prediction input.
StreamingRawPredictResponse
Response message for PredictionService.StreamingRawPredict
.
output
bytes
The prediction output.
StreamingReadFeatureValuesRequest
Request message for [FeaturestoreOnlineServingService.StreamingFeatureValuesRead][].
entity_type
string
Required. The resource name of the entities' type. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}
. For example, for a machine learning model predicting user clicks on a website, an EntityType ID could be user
.
entity_ids[]
string
Required. IDs of entities to read Feature values of. The maximum number of IDs is 100. For example, for a machine learning model predicting user clicks on a website, an entity ID could be user_123
.
Required. Selector choosing Features of the target EntityType. Feature IDs will be deduplicated.
StringArray
A list of string values.
values[]
string
A list of string values.
StructFieldValue
One field of a Struct (or object) type feature value.
name
string
Name of the field in the struct feature.
The value for this field.
StructValue
Struct (or object) type feature value.
A list of field values.
Study
A message representing a Study.
name
string
Output only. The name of a study. The study's globally unique identifier. Format: projects/{project}/locations/{location}/studies/{study}
display_name
string
Required. Describes the Study; the default value is an empty string.
Required. Configuration of the Study.
Output only. The detailed state of a Study.
Output only. Time at which the study was created.
inactive_reason
string
Output only. A human readable reason why the Study is inactive. This should be empty if a study is ACTIVE or COMPLETED.
State
Describes the Study state.
Enums | |
---|---|
STATE_UNSPECIFIED |
The study state is unspecified. |
ACTIVE |
The study is active. |
INACTIVE |
The study is stopped due to an internal error. |
COMPLETED |
The study is done when the service exhausts the parameter search space or max_trial_count is reached. |
StudySpec
Represents specification of a Study.
Required. Metric specs for the Study.
Required. The set of parameters to tune.
The search algorithm specified for the Study.
The observation noise level of the study. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.
Describe which measurement selection type will be used
Union field automated_stopping_spec
.
automated_stopping_spec
can be only one of the following:
The automated early stopping spec using decay curve rule.
The automated early stopping spec using median rule.
The automated early stopping spec using convex stopping rule.
Conditions for automated stopping of a Study. Enable automated stopping by configuring at least one condition.
Algorithm
The available search algorithms for the Study.
Enums | |
---|---|
ALGORITHM_UNSPECIFIED |
The default algorithm used by Vertex AI for hyperparameter tuning and Vertex AI Vizier. |
GRID_SEARCH |
Simple grid search within the feasible space. To use grid search, all parameters must be INTEGER , CATEGORICAL , or DISCRETE . |
RANDOM_SEARCH |
Simple random search within the feasible space. |
ConvexAutomatedStoppingSpec
Configuration for ConvexAutomatedStoppingSpec. When there are enough completed trials (configured by min_measurement_count), for pending trials with enough measurements and steps, the policy first computes an overestimate of the objective value at max_num_steps according to the slope of the incomplete objective value curve. No prediction can be made if the curve is completely flat. If the overestimation is worse than the best objective value of the completed trials, this pending trial will be early-stopped, but a last measurement will be added to the pending trial with max_num_steps and predicted objective value from the autoregression model.
max_step_count
int64
Steps used in predicting the final objective for early stopped trials. In general, it's set to be the same as the defined steps in training / tuning. If not defined, it will learn it from the completed trials. When use_steps is false, this field is set to the maximum elapsed seconds.
min_step_count
int64
Minimum number of steps for a trial to complete. Trials which do not have a measurement with step_count > min_step_count won't be considered for early stopping. It's ok to set it to 0, and a trial can be early stopped at any stage. By default, min_step_count is set to be one-tenth of the max_step_count. When use_elapsed_duration is true, this field is set to the minimum elapsed seconds.
min_measurement_count
int64
The minimal number of measurements in a Trial. Early-stopping checks will not trigger if there are fewer than min_measurement_count+1 completed trials, or for pending trials with fewer than min_measurement_count measurements. If not defined, the default value is 5.
learning_rate_parameter_name
string
The hyper-parameter name used in the tuning job that stands for learning rate. Leave it blank if learning rate is not in a parameter in tuning. The learning_rate is used to estimate the objective value of the ongoing trial.
use_elapsed_duration
bool
This bool determines whether or not the rule is applied based on elapsed_secs or steps. If use_elapsed_duration==false, the early stopping decision is made according to the predicted objective values according to the target steps. If use_elapsed_duration==true, elapsed_secs is used instead of steps. Also, in this case, the parameters max_num_steps and min_num_steps are overloaded to contain max_elapsed_seconds and min_elapsed_seconds.
update_all_stopped_trials
bool
ConvexAutomatedStoppingSpec by default only updates the trials that need to be early stopped using a newly trained auto-regressive model. When this flag is set to True, all stopped trials from the beginning are potentially updated in terms of their final_measurement
. Also, note that the training logic of autoregressive models is different in this case. Enabling this option has shown better results and this may be the default option in the future.
DecayCurveAutomatedStoppingSpec
The decay curve automated stopping rule builds a Gaussian Process Regressor to predict the final objective value of a Trial based on the already completed Trials and the intermediate measurements of the current Trial. Early stopping is requested for the current Trial if there is very low probability to exceed the optimal value found so far.
use_elapsed_duration
bool
True if Measurement.elapsed_duration
is used as the x-axis of each Trials Decay Curve. Otherwise, Measurement.step_count
will be used as the x-axis.
MeasurementSelectionType
This indicates which measurement to use if/when the service automatically selects the final measurement from previously reported intermediate measurements. Choose this based on two considerations: A) Do you expect your measurements to monotonically improve? If so, choose LAST_MEASUREMENT. On the other hand, if you're in a situation where your system can "over-train" and you expect the performance to get better for a while but then start declining, choose BEST_MEASUREMENT. B) Are your measurements significantly noisy and/or irreproducible? If so, BEST_MEASUREMENT will tend to be over-optimistic, and it may be better to choose LAST_MEASUREMENT. If both or neither of (A) and (B) apply, it doesn't matter which selection type is chosen.
Enums | |
---|---|
MEASUREMENT_SELECTION_TYPE_UNSPECIFIED |
Will be treated as LAST_MEASUREMENT. |
LAST_MEASUREMENT |
Use the last measurement reported. |
BEST_MEASUREMENT |
Use the best measurement reported. |
MedianAutomatedStoppingSpec
The median automated stopping rule stops a pending Trial if the Trial's best objective_value is strictly below the median 'performance' of all completed Trials reported up to the Trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the Trial in each measurement.
use_elapsed_duration
bool
True if the median automated stopping rule applies to Measurement.elapsed_duration
. This means that the elapsed_duration field of the latest measurement of the current Trial is used to compute the median objective value for each completed Trial.
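The median rule can be sketched in a few lines: compare the pending Trial's best objective so far with the median of the completed Trials' running averages at the same step. Lists of floats stand in for reported measurements, and maximization is assumed (this is an illustration, not the service's implementation):

```python
from statistics import median

def should_stop(pending_measurements, completed_trials, step):
    """Median automated stopping sketch (assumes a maximization objective).

    completed_trials: list of measurement lists, one per completed Trial.
    A pending trial is stopped if its best objective so far is strictly
    below the median running average of completed trials at `step`.
    """
    running_avgs = [sum(t[:step]) / step
                    for t in completed_trials if len(t) >= step]
    if not running_avgs:
        return False  # nothing to compare against yet
    return max(pending_measurements[:step]) < median(running_avgs)
```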
MetricSpec
Represents a metric to optimize.
metric_id
string
Required. The ID of the metric. Must not contain whitespaces and must be unique amongst all MetricSpecs.
Required. The optimization goal of the metric.
Used for safe search. In this case, the metric will be a safety metric. You must provide a separate metric for the objective.
GoalType
The available types of optimization goals.
Enums | |
---|---|
GOAL_TYPE_UNSPECIFIED |
Goal Type will default to maximize. |
MAXIMIZE |
Maximize the goal metric. |
MINIMIZE |
Minimize the goal metric. |
SafetyMetricConfig
Used in safe optimization to specify threshold levels and risk tolerance.
safety_threshold
double
Safety threshold (boundary value between safe and unsafe). NOTE that if you leave SafetyMetricConfig unset, a default value of 0 will be used.
desired_min_safe_trials_fraction
double
Desired minimum fraction of safe trials (over total number of trials) that should be targeted by the algorithm at any time during the study (best effort). This should be between 0.0 and 1.0 and a value of 0.0 means that there is no minimum and an algorithm proceeds without targeting any specific fraction. A value of 1.0 means that the algorithm attempts to only Suggest safe Trials.
ObservationNoise
Describes the noise level of the repeated observations.
"Noisy" means that the repeated observations with the same Trial parameters may lead to different metric evaluations.
Enums | |
---|---|
OBSERVATION_NOISE_UNSPECIFIED |
The default noise level chosen by Vertex AI. |
LOW |
Vertex AI assumes that the objective function is (nearly) perfectly reproducible, and will never repeat the same Trial parameters. |
HIGH |
Vertex AI will estimate the amount of noise in metric evaluations and may repeat the same Trial parameters more than once. |
ParameterSpec
Represents a single parameter to optimize.
parameter_id
string
Required. The ID of the parameter. Must not contain whitespaces and must be unique amongst all ParameterSpecs.
How the parameter should be scaled. Leave unset for CATEGORICAL
parameters.
A conditional parameter node is active if the parameter's value matches the conditional node's parent_value_condition.
If two items in conditional_parameter_specs have the same name, they must have disjoint parent_value_condition.
Union field parameter_value_spec
.
parameter_value_spec
can be only one of the following:
The value spec for a 'DOUBLE' parameter.
The value spec for an 'INTEGER' parameter.
The value spec for a 'CATEGORICAL' parameter.
The value spec for a 'DISCRETE' parameter.
CategoricalValueSpec
Value specification for a parameter in CATEGORICAL
type.
values[]
string
Required. The list of possible categories.
default_value
string
A default value for a CATEGORICAL
parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point.
Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.
ConditionalParameterSpec
Represents a parameter spec with condition from its parent parameter.
Required. The spec for a conditional parameter.
Union field parent_value_condition
. A set of parameter values from the parent ParameterSpec's feasible space. parent_value_condition
can be only one of the following:
The spec for matching values from a parent parameter of DISCRETE
type.
The spec for matching values from a parent parameter of INTEGER
type.
The spec for matching values from a parent parameter of CATEGORICAL
type.
CategoricalValueCondition
Represents the spec to match categorical values from parent parameter.
values[]
string
Required. Matches values of the parent parameter of 'CATEGORICAL' type. All values must exist in categorical_value_spec
of parent parameter.
DiscreteValueCondition
Represents the spec to match discrete values from parent parameter.
values[]
double
Required. Matches values of the parent parameter of 'DISCRETE' type. All values must exist in discrete_value_spec
of parent parameter.
The Epsilon of the value matching is 1e-10.
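A sketch of DISCRETE value matching with the documented epsilon of 1e-10 (the helper name is illustrative):

```python
EPSILON = 1e-10  # documented tolerance for DISCRETE value matching

def discrete_value_matches(parent_value, condition_values):
    """True if parent_value matches any condition value within EPSILON."""
    return any(abs(parent_value - v) <= EPSILON for v in condition_values)
```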
IntValueCondition
Represents the spec to match integer values from parent parameter.
values[]
int64
Required. Matches values of the parent parameter of 'INTEGER' type. All values must lie in integer_value_spec
of parent parameter.
DiscreteValueSpec
Value specification for a parameter in DISCRETE
type.
values[]
double
Required. A list of possible values. The list should be in increasing order and at least 1e-10 apart. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
default_value
double
A default value for a DISCRETE
parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point. It automatically rounds to the nearest feasible discrete point.
Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.
DoubleValueSpec
Value specification for a parameter in DOUBLE
type.
min_value
double
Required. Inclusive minimum value of the parameter.
max_value
double
Required. Inclusive maximum value of the parameter.
default_value
double
A default value for a DOUBLE
parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point.
Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.
IntegerValueSpec
Value specification for a parameter in INTEGER
type.
min_value
int64
Required. Inclusive minimum value of the parameter.
max_value
int64
Required. Inclusive maximum value of the parameter.
default_value
int64
A default value for an INTEGER
parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point.
Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.
ScaleType
The type of scaling that should be applied to this parameter.
Enums | |
---|---|
SCALE_TYPE_UNSPECIFIED |
By default, no scaling is applied. |
UNIT_LINEAR_SCALE |
Scales the feasible space to (0, 1) linearly. |
UNIT_LOG_SCALE |
Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive. |
UNIT_REVERSE_LOG_SCALE |
Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive. |
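The three scalings each map a feasible range [min_value, max_value] onto the unit interval; a sketch (function names are illustrative, and the log variants require a strictly positive range):

```python
import math

def unit_linear(x, lo, hi):
    """UNIT_LINEAR_SCALE: linear map of [lo, hi] onto [0, 1]."""
    return (x - lo) / (hi - lo)

def unit_log(x, lo, hi):
    """UNIT_LOG_SCALE: logarithmic map; spreads out values near `lo`."""
    return (math.log(x) - math.log(lo)) / (math.log(hi) - math.log(lo))

def unit_reverse_log(x, lo, hi):
    """UNIT_REVERSE_LOG_SCALE: spreads out values near `hi` instead."""
    return 1.0 - (math.log(hi + lo - x) - math.log(lo)) / (math.log(hi) - math.log(lo))
```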
StudyStoppingConfig
The configuration (stopping conditions) for automated stopping of a Study. Conditions include trial budgets, time budgets, and convergence detection.
If true, a Study enters STOPPING_ASAP whenever it would normally enter the STOPPING state.
The bottom line is: set to true if you want to interrupt on-going evaluations of Trials as soon as the study stopping condition is met. (Please see Study.State documentation for the source of truth).
Each "stopping rule" in this proto specifies an "if" condition. Before Vizier would generate a new suggestion, it first checks each specified stopping rule, from top to bottom in this list. Note that the first few rules (e.g. minimum_runtime_constraint, min_num_trials) will prevent other stopping rules from being evaluated until they are met. For example, setting min_num_trials=5
and always_stop_after= 1 hour
means that the Study will ONLY stop after it has 5 COMPLETED trials, even if more than an hour has passed since its creation. It follows the first applicable rule (whose "if" condition is satisfied) to make a stopping decision. If none of the specified rules are applicable, then Vizier decides that the study should not stop. If Vizier decides that the study should stop, the study enters STOPPING state (or STOPPING_ASAP if should_stop_asap = true). IMPORTANT: The automatic study state transition happens precisely as described above; that is, deleting trials or updating StudyConfig NEVER automatically moves the study state back to ACTIVE. If you want to resume a Study that was stopped, 1) change the stopping conditions if necessary, 2) activate the study, and then 3) ask for suggestions. If the specified time or duration has not passed, do not stop the study.
If the specified time or duration has passed, stop the study.
If there are fewer than this many COMPLETED trials, do not stop the study.
If there are more than this many trials, stop the study.
If the objective value has not improved for this many consecutive trials, stop the study.
WARNING: Effective only for single-objective studies.
If the objective value has not improved for this much time, stop the study.
WARNING: Effective only for single-objective studies.
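The rule ordering described above (preconditions such as min_num_trials are checked before any stop rule can fire) can be sketched as follows; the parameter names are illustrative simplifications of the config fields:

```python
def should_stop_study(completed_trials, elapsed_hours,
                      min_num_trials=5, max_num_trials=None,
                      always_stop_after_hours=None):
    """Evaluate stopping rules top to bottom, preconditions first.

    Illustrative sketch of the documented ordering: with min_num_trials=5
    and always_stop_after=1 hour, the study stops only once it has 5
    COMPLETED trials, even if more than an hour has passed.
    """
    if completed_trials < min_num_trials:
        return False  # precondition not met: later rules are not evaluated
    if always_stop_after_hours is not None and elapsed_hours >= always_stop_after_hours:
        return True
    if max_num_trials is not None and completed_trials > max_num_trials:
        return True
    return False  # no applicable rule: the study should not stop
```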
StudyTimeConstraint
SuggestTrialsMetadata
Details of operations that perform Trials suggestion.
Operation metadata for suggesting Trials.
client_id
string
The identifier of the client that is requesting the suggestion.
If multiple SuggestTrialsRequests have the same client_id
, the service will return the identical suggested Trial if the Trial is pending, and provide a new Trial if the last suggested Trial was completed.
SuggestTrialsRequest
Request message for VizierService.SuggestTrials
.
parent
string
Required. The project and location that the Study belongs to. Format: projects/{project}/locations/{location}/studies/{study}
suggestion_count
int32
Required. The number of suggestions requested. It must be positive.
client_id
string
Required. The identifier of the client that is requesting the suggestion.
If multiple SuggestTrialsRequests have the same client_id
, the service will return the identical suggested Trial if the Trial is pending, and provide a new Trial if the last suggested Trial was completed.
Optional. This allows you to specify the "context" for a Trial; a context is a slice (a subspace) of the search space.
Typical uses for contexts: 1) You are using Vizier to tune a server for best performance, but there's a strong weekly cycle. The context specifies the day-of-week. This allows Tuesday to generalize from Wednesday without assuming that everything is identical. 2) Imagine you're optimizing some medical treatment for people. As they walk in the door, you know certain facts about them (e.g. sex, weight, height, blood-pressure). Put that information in the context, and Vizier will adapt its suggestions to the patient. 3) You want to do a fair A/B test efficiently. Specify the "A" and "B" conditions as contexts, and Vizier will generalize between "A" and "B" conditions. If they are similar, this will allow Vizier to converge to the optimum faster than if "A" and "B" were separate Studies. NOTE: You can also enter contexts as REQUESTED Trials, e.g. via the CreateTrial() RPC; that's the asynchronous option where you don't need a close association between contexts and suggestions.
NOTE: All the Parameters you set in a context MUST be defined in the Study. NOTE: You must supply 0 or $suggestion_count contexts. If you don't supply any contexts, Vizier will make suggestions from the full search space specified in the StudySpec; if you supply a full set of contexts, each suggestion will match the corresponding context. NOTE: A Context with no features set matches anything, and allows suggestions from the full search space. NOTE: Contexts MUST lie within the search space specified in the StudySpec. It's an error if they don't. NOTE: Contexts preferentially match ACTIVE then REQUESTED trials before new suggestions are generated. NOTE: Generation of suggestions involves a match between a Context and (optionally) a REQUESTED trial; if that match is not fully specified, a suggestion will be generated in the merged subspace.
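Put together, a minimal SuggestTrials call can be sketched as below. The project, location, and study IDs are hypothetical; for REST calls the parent study name travels in the request URL, while suggestion_count and client_id form the request body.

```python
import json

# Hypothetical resource IDs; the parent study appears in the request URL.
parent = "projects/my-project/locations/us-central1/studies/my-study"

# Body of a SuggestTrialsRequest: both fields below are required.
request = {
    "suggestion_count": 3,     # must be positive
    "client_id": "worker-0",   # re-using an id re-fetches the pending Trial
}
body = json.dumps(request)
```

Because client_id identifies the caller, a crashed worker that retries with the same id receives its still-pending Trial rather than a fresh suggestion.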
SuggestTrialsResponse
Response message for VizierService.SuggestTrials.
A list of Trials.
The state of the Study.
The time at which the operation was started.
The time at which operation processing completed.
SummarizationHelpfulnessInput
Input for summarization helpfulness metric.
Required. Spec for summarization helpfulness score metric.
Required. Summarization helpfulness instance.
SummarizationHelpfulnessInstance
Spec for summarization helpfulness instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Required. Text to be summarized.
instruction
string
Optional. Summarization prompt for LLM.
SummarizationHelpfulnessResult
Spec for summarization helpfulness result.
explanation
string
Output only. Explanation for summarization helpfulness score.
score
float
Output only. Summarization Helpfulness score.
confidence
float
Output only. Confidence for summarization helpfulness score.
SummarizationHelpfulnessSpec
Spec for summarization helpfulness score metric.
use_reference
bool
Optional. Whether to use instance.reference to compute summarization helpfulness.
version
int32
Optional. Which version to use for evaluation.
SummarizationQualityInput
Input for summarization quality metric.
Required. Spec for summarization quality score metric.
Required. Summarization quality instance.
SummarizationQualityInstance
Spec for summarization quality instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Required. Text to be summarized.
instruction
string
Required. Summarization prompt for LLM.
SummarizationQualityResult
Spec for summarization quality result.
explanation
string
Output only. Explanation for summarization quality score.
score
float
Output only. Summarization Quality score.
confidence
float
Output only. Confidence for summarization quality score.
SummarizationQualitySpec
Spec for summarization quality score metric.
use_reference
bool
Optional. Whether to use instance.reference to compute summarization quality.
version
int32
Optional. Which version to use for evaluation.
SummarizationVerbosityInput
Input for summarization verbosity metric.
Required. Spec for summarization verbosity score metric.
Required. Summarization verbosity instance.
SummarizationVerbosityInstance
Spec for summarization verbosity instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Optional. Ground truth used to compare against the prediction.
context
string
Required. Text to be summarized.
instruction
string
Optional. Summarization prompt for LLM.
SummarizationVerbosityResult
Spec for summarization verbosity result.
explanation
string
Output only. Explanation for summarization verbosity score.
score
float
Output only. Summarization Verbosity score.
confidence
float
Output only. Confidence for summarization verbosity score.
SummarizationVerbositySpec
Spec for summarization verbosity score metric.
use_reference
bool
Optional. Whether to use instance.reference to compute summarization verbosity.
version
int32
Optional. Which version to use for evaluation.
SupervisedHyperParameters
Hyperparameters for SFT.
epoch_count
int64
Optional. Number of complete passes the model makes over the entire training dataset during training.
learning_rate_multiplier
double
Optional. Multiplier for adjusting the default learning rate.
Optional. Adapter size for tuning.
AdapterSize
Supported adapter sizes for tuning.
Enums | |
---|---|
ADAPTER_SIZE_UNSPECIFIED | Adapter size is unspecified. |
ADAPTER_SIZE_ONE | Adapter size 1. |
ADAPTER_SIZE_FOUR | Adapter size 4. |
ADAPTER_SIZE_EIGHT | Adapter size 8. |
ADAPTER_SIZE_SIXTEEN | Adapter size 16. |
ADAPTER_SIZE_THIRTY_TWO | Adapter size 32. |
SupervisedTuningDataStats
Tuning data statistics for Supervised Tuning.
tuning_dataset_example_count
int64
Output only. Number of examples in the tuning dataset.
total_tuning_character_count
int64
Output only. Number of tuning characters in the tuning dataset.
total_billable_character_count
(deprecated)
int64
Output only. Number of billable characters in the tuning dataset.
total_billable_token_count
int64
Output only. Number of billable tokens in the tuning dataset.
tuning_step_count
int64
Output only. Number of tuning steps for this Tuning Job.
Output only. Dataset distributions for the user input tokens.
Output only. Dataset distributions for the user output tokens.
Output only. Dataset distributions for the messages per example.
Output only. Sample user messages in the training dataset uri.
SupervisedTuningDatasetDistribution
Dataset distribution for Supervised Tuning.
sum
int64
Output only. Sum of a given population of values.
billable_sum
int64
Output only. Sum of a given population of values that are billable.
min
double
Output only. The minimum of the population values.
max
double
Output only. The maximum of the population values.
mean
double
Output only. The arithmetic mean of the values in the population.
median
double
Output only. The median of the values in the population.
p5
double
Output only. The 5th percentile of the values in the population.
p95
double
Output only. The 95th percentile of the values in the population.
Output only. Defines the histogram bucket.
DatasetBucket
Dataset bucket used to create a histogram for the distribution given a population of values.
count
double
Output only. Number of values in the bucket.
left
double
Output only. Left bound of the bucket.
right
double
Output only. Right bound of the bucket.
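The statistics and DatasetBucket histogram above can be mirrored locally. The sketch below uses equal-width buckets and nearest-rank percentiles; both are illustrative assumptions, since the service's exact bucketing and percentile method are not specified here.

```python
from statistics import mean, median

def distribution(values, bucket_count=4):
    """Summary statistics plus equal-width histogram buckets for a
    population of values, mirroring the fields of
    SupervisedTuningDatasetDistribution. Equal-width bucketing and
    nearest-rank percentiles are illustrative assumptions."""
    values = sorted(values)
    n = len(values)

    def pct(p):
        return values[min(n - 1, int(p / 100 * n))]

    lo, hi = values[0], values[-1]
    width = (hi - lo) / bucket_count or 1
    buckets = []
    for i in range(bucket_count):
        left, right = lo + i * width, lo + (i + 1) * width
        # The last bucket is closed on the right so `hi` is counted.
        count = sum(left <= v < right or (i == bucket_count - 1 and v == hi)
                    for v in values)
        buckets.append({"count": count, "left": left, "right": right})
    return {"sum": sum(values), "min": lo, "max": hi,
            "mean": mean(values), "median": median(values),
            "p5": pct(5), "p95": pct(95), "buckets": buckets}

stats = distribution([1, 2, 2, 3, 4, 8, 9, 9])
```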
SupervisedTuningSpec
Tuning Spec for Supervised Tuning for first party models.
training_dataset_uri
string
Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file.
validation_dataset_uri
string
Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file.
Optional. Hyperparameters for SFT.
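As a concrete illustration, a JSONL training file for training_dataset_uri can be produced like this. The record schema shown (a "contents" list of role/text turns) is an assumption for illustration only; use the exact format documented for the model being tuned.

```python
import json, os, tempfile

# Each JSONL line is one training example. The "contents" shape below is
# illustrative; the exact record schema depends on the model being tuned.
examples = [
    {"contents": [
        {"role": "user", "parts": [{"text": "What is JSONL?"}]},
        {"role": "model", "parts": [{"text": "JSON objects, one per line."}]},
    ]},
]

path = os.path.join(tempfile.gettempdir(), "tuning_dataset.jsonl")
with open(path, "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")  # one compact JSON object per line
```

The resulting file would then be uploaded to Cloud Storage and referenced by its gs:// path in training_dataset_uri.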
SyncFeatureViewRequest
Request message for FeatureOnlineStoreAdminService.SyncFeatureView.
feature_view
string
Required. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}
SyncFeatureViewResponse
Response message for FeatureOnlineStoreAdminService.SyncFeatureView.
feature_view_sync
string
Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}
TFRecordDestination
The storage details for TFRecord output content.
Required. Google Cloud Storage location.
Tensor
A tensor value type.
The data type of tensor.
shape[]
int64
Shape of the tensor.
bool_val[]
bool
Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order.
[BOOL][google.aiplatform.master.Tensor.DataType.BOOL]
string_val[]
string
[STRING][google.aiplatform.master.Tensor.DataType.STRING]
bytes_val[]
bytes
[STRING][google.aiplatform.master.Tensor.DataType.STRING]
float_val[]
float
[FLOAT][google.aiplatform.master.Tensor.DataType.FLOAT]
double_val[]
double
[DOUBLE][google.aiplatform.master.Tensor.DataType.DOUBLE]
int_val[]
int32
[INT_8][google.aiplatform.master.Tensor.DataType.INT8] [INT_16][google.aiplatform.master.Tensor.DataType.INT16] [INT_32][google.aiplatform.master.Tensor.DataType.INT32]
int64_val[]
int64
[INT64][google.aiplatform.master.Tensor.DataType.INT64]
uint_val[]
uint32
[UINT8][google.aiplatform.master.Tensor.DataType.UINT8] [UINT16][google.aiplatform.master.Tensor.DataType.UINT16] [UINT32][google.aiplatform.master.Tensor.DataType.UINT32]
uint64_val[]
uint64
[UINT64][google.aiplatform.master.Tensor.DataType.UINT64]
A list of tensor values.
A map of string to tensor.
tensor_val
bytes
Serialized raw tensor content.
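The row-major convention described for the *_val fields can be seen in a small sketch. The dict below stands in for the Tensor message; only the field matching dtype is populated.

```python
def to_tensor(matrix):
    """Encode a 2-D nested list as a Tensor-style dict: `shape` records the
    dimensions and only the field matching `dtype` is populated, holding
    the values flattened in row-major order (as the *_val fields require)."""
    rows, cols = len(matrix), len(matrix[0])
    flat = [v for row in matrix for v in row]  # row-major: row 0 first
    return {"dtype": "FLOAT", "shape": [rows, cols], "float_val": flat}

t = to_tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```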
DataType
Data type of the tensor.
Enums | |
---|---|
DATA_TYPE_UNSPECIFIED | Not a legal value for DataType. Used to indicate a DataType field has not been set. |
BOOL | Data types that all computation devices are expected to be capable of supporting. |
STRING | |
FLOAT | |
DOUBLE | |
INT8 | |
INT16 | |
INT32 | |
INT64 | |
UINT8 | |
UINT16 | |
UINT32 | |
UINT64 | |
Tensorboard
Tensorboard is a physical database that stores users' training metrics. A default Tensorboard is provided in each region of a Google Cloud project. If needed, users can also create extra Tensorboards in their projects.
name
string
Output only. Name of the Tensorboard. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
display_name
string
Required. User provided name of this Tensorboard.
description
string
Description of this Tensorboard.
Customer-managed encryption key spec for a Tensorboard. If set, this Tensorboard and all sub-resources of this Tensorboard will be secured by this key.
blob_storage_path_prefix
string
Output only. Consumer project Cloud Storage path prefix used to store blob data, which can either be a bucket or directory. Does not end with a '/'.
run_count
int32
Output only. The number of Runs stored in this Tensorboard.
Output only. Timestamp when this Tensorboard was created.
Output only. Timestamp when this Tensorboard was last updated.
labels
map<string, string>
The labels with user-defined metadata to organize your Tensorboards.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Tensorboard (System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
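The label constraints above can be checked client-side before sending a request. The sketch below is an approximation: it enforces lowercase letters, digits, underscores, dashes, and the 64-codepoint limits, but omits the international characters the service also accepts.

```python
import re

# Approximate client-side check of the documented label constraints.
# International characters, which the service also allows, are omitted.
KEY_RE = re.compile(r"^[a-z0-9_-]{1,64}$")
VALUE_RE = re.compile(r"^[a-z0-9_-]{0,64}$")

def valid_labels(labels):
    """True if there are at most 64 user labels and every key and value
    fits the documented character set and length limits."""
    return (len(labels) <= 64 and
            all(KEY_RE.match(k) and VALUE_RE.match(v)
                for k, v in labels.items()))
```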
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
is_default
bool
Used to indicate if the TensorBoard instance is the default one. Each project & region can have at most one default TensorBoard instance. Creation of a default TensorBoard instance and updating an existing TensorBoard instance to be default will mark all other TensorBoard instances (if any) as non default.
satisfies_pzs
bool
Output only. Reserved for future use.
satisfies_pzi
bool
Output only. Reserved for future use.
TensorboardBlob
One blob (e.g., image, graph) viewable on a blob metric plot.
id
string
Output only. A URI safe key uniquely identifying a blob. Can be used to locate the blob stored in the Cloud Storage bucket of the consumer project.
data
bytes
Optional. The bytes of the blob are not present unless they're returned by the ReadTensorboardBlobData endpoint.
TensorboardBlobSequence
One point viewable on a blob metric plot, but mostly just a wrapper message to work around the restriction that repeated fields can't be used directly within oneof fields.
List of blobs contained within the sequence.
TensorboardExperiment
A TensorboardExperiment is a group of TensorboardRuns that are typically the results of a training job run, in a Tensorboard.
name
string
Output only. Name of the TensorboardExperiment. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
display_name
string
User provided name of this TensorboardExperiment.
description
string
Description of this TensorboardExperiment.
Output only. Timestamp when this TensorboardExperiment was created.
Output only. Timestamp when this TensorboardExperiment was last updated.
labels
map<string, string>
The labels with user-defined metadata to organize your TensorboardExperiment.
Label keys and values cannot be longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Dataset (System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with aiplatform.googleapis.com/ and are immutable. The following system labels exist for each Dataset:
aiplatform.googleapis.com/dataset_metadata_schema: output only. Its value is the metadata_schema's title.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
source
string
Immutable. Source of the TensorboardExperiment. Example: a custom training job.
TensorboardRun
TensorboardRun maps to a specific execution of a training job with a given set of hyperparameter values, model definition, dataset, etc.
name
string
Output only. Name of the TensorboardRun. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
display_name
string
Required. User provided name of this TensorboardRun. This value must be unique among all TensorboardRuns belonging to the same parent TensorboardExperiment.
description
string
Description of this TensorboardRun.
Output only. Timestamp when this TensorboardRun was created.
Output only. Timestamp when this TensorboardRun was last updated.
labels
map<string, string>
The labels with user-defined metadata to organize your TensorboardRuns.
This field will be used to filter and visualize Runs in the Tensorboard UI. For example, a Vertex AI training job can set a label aiplatform.googleapis.com/training_job_id=xxxxx to all the runs created within that job. An end user can set a label experiment_id=xxxxx for all the runs produced in a Jupyter notebook. These runs can be grouped by a label value and visualized together in the Tensorboard UI.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardRun (System labels are excluded).
See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
TensorboardTensor
One point viewable on a tensor metric plot.
value
bytes
Required. Serialized form of https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor.proto
version_number
int32
Optional. Version number of the TensorProto used to serialize value.
TensorboardTimeSeries
TensorboardTimeSeries maps to a time series produced in training runs.
name
string
Output only. Name of the TensorboardTimeSeries.
display_name
string
Required. User provided name of this TensorboardTimeSeries. This value should be unique among all TensorboardTimeSeries resources belonging to the same TensorboardRun resource (parent resource).
description
string
Description of this TensorboardTimeSeries.
Required. Immutable. Type of TensorboardTimeSeries value.
Output only. Timestamp when this TensorboardTimeSeries was created.
Output only. Timestamp when this TensorboardTimeSeries was last updated.
etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
plugin_name
string
Immutable. Name of the plugin this time series pertains to, such as Scalar, Tensor, or Blob.
plugin_data
bytes
Data of the current plugin, with the size limited to 65KB.
Output only. Scalar, Tensor, or Blob metadata for this TensorboardTimeSeries.
Metadata
Describes metadata for a TensorboardTimeSeries.
max_step
int64
Output only. Max step index of all data points within a TensorboardTimeSeries.
Output only. Max wall clock timestamp of all data points within a TensorboardTimeSeries.
max_blob_sequence_length
int64
Output only. The largest blob sequence length (number of blobs) of all data points in this time series, if its ValueType is BLOB_SEQUENCE.
ValueType
An enum representing the value type of a TensorboardTimeSeries.
Enums | |
---|---|
VALUE_TYPE_UNSPECIFIED | The value type is unspecified. |
SCALAR | Used for TensorboardTimeSeries that is a list of scalars. E.g. accuracy of a model over epochs/time. |
TENSOR | Used for TensorboardTimeSeries that is a list of tensors. E.g. histograms of weights of a layer in a model over epoch/time. |
BLOB_SEQUENCE | Used for TensorboardTimeSeries that is a list of blob sequences. E.g. set of sample images with labels over epochs/time. |
ThresholdConfig
The config for feature monitoring threshold.
Union field threshold. threshold can be only one of the following:
value
double
Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
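For categorical features, the distance compared against this threshold is the L-infinity norm between the baseline and current category distributions; a minimal sketch of that comparison (the distributions and threshold below are hypothetical):

```python
def l_infinity(p, q):
    """L-infinity distance between two categorical distributions: the
    largest absolute difference in probability over all categories."""
    keys = set(p) | set(q)
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

baseline = {"a": 0.6, "b": 0.4}
current  = {"a": 0.3, "b": 0.5, "c": 0.2}
drift = l_infinity(baseline, current)
alert = drift > 0.1  # compare against a non-zero threshold value
```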
TimeSeriesData
All the data stored in a TensorboardTimeSeries.
tensorboard_time_series_id
string
Required. The ID of the TensorboardTimeSeries, which will become the final component of the TensorboardTimeSeries' resource name.
Required. Immutable. The value type of this time series. All the values in this time series data must match this value type.
Required. Data points in this time series.
TimeSeriesDataPoint
A TensorboardTimeSeries data point.
Wall clock timestamp when this data point is generated by the end user.
step
int64
Step index of this data point within the run.
Union field value. Value of this time series data point. value can be only one of the following:
A scalar value.
A tensor value.
A blob sequence value.
TimestampSplit
Assigns input data to training, validation, and test sets based on provided timestamps. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set.
Supported only for tabular Datasets.
training_fraction
double
The fraction of the input data that is to be used to train the Model.
validation_fraction
double
The fraction of the input data that is to be used to validate the Model.
test_fraction
double
The fraction of the input data that is to be used to evaluate the Model.
key
string
Required. The key is a name of one of the Dataset's data columns. The values of the key (the values in the column) must be in RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z). If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.
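The split described above can be sketched locally: sort rows by the RFC 3339 timestamp under the key, assign the youngest fraction to training, the next to validation, and the rest to test, skipping rows with a missing or invalid key. The int() rounding of the fractions is an illustrative choice.

```python
from datetime import datetime

def timestamp_split(rows, key, training_fraction=0.8, validation_fraction=0.1):
    """Assign rows the way TimestampSplit describes: youngest rows go to
    training, the next slice to validation, and the oldest to test. Rows
    whose key is missing or invalid are ignored, as the pipeline does."""
    stamped = []
    for row in rows:
        try:
            # RFC 3339 with time-offset "Z", e.g. 1985-04-12T23:20:50.52Z
            t = datetime.fromisoformat(row[key].replace("Z", "+00:00"))
        except (KeyError, ValueError, AttributeError):
            continue  # missing or invalid key: the row is ignored
        stamped.append((t, row))
    stamped.sort(key=lambda pair: pair[0], reverse=True)  # youngest first
    n = len(stamped)
    n_train = int(n * training_fraction)
    n_val = int(n * validation_fraction)
    train = [r for _, r in stamped[:n_train]]
    val = [r for _, r in stamped[n_train:n_train + n_val]]
    test = [r for _, r in stamped[n_train + n_val:]]
    return train, val, test
```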
TokensInfo
Tokens info with a list of tokens and the corresponding list of token ids.
tokens[]
bytes
A list of tokens from the input.
token_ids[]
int64
A list of token ids from the input.
role
string
Optional. The role from the corresponding Content.
Tool
Tool details that the model may use to generate a response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. The model may decide to call a subset of these functions by populating [FunctionCall][content.part.function_call] in the response. The user should provide a [FunctionResponse][content.part.function_response] for each function call in the next turn. Based on the function responses, the model will generate the final response back to the user. A maximum of 128 function declarations can be provided.
Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
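A Tool carrying exactly one tool type, here function declarations, can be sketched as below. The weather function, its description, and its parameter schema are hypothetical; the field names follow the FunctionDeclaration shape (name, description, parameters as an OpenAPI-style schema).

```python
import json

# A Tool with one (hypothetical) function declaration; a Tool should
# contain exactly one type of Tool.
tool = {
    "function_declarations": [{
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "OBJECT",
            "properties": {"city": {"type": "STRING"}},
            "required": ["city"],
        },
    }]
}
assert len(tool["function_declarations"]) <= 128  # documented maximum
```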
ToolCallValidInput
Input for tool call valid metric.
Required. Spec for tool call valid metric.
Required. Repeated tool call valid instances.
ToolCallValidInstance
Spec for tool call valid instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
ToolCallValidMetricValue
Tool call valid metric value for an instance.
score
float
Output only. Tool call valid score.
ToolCallValidResults
Results for tool call valid metric.
Output only. Tool call valid metric values.
ToolCallValidSpec
This type has no fields.
Spec for tool call valid metric.
ToolConfig
Tool config. This config is shared for all tools provided in the request.
Optional. Function calling config.
ToolNameMatchInput
Input for tool name match metric.
Required. Spec for tool name match metric.
Required. Repeated tool name match instances.
ToolNameMatchInstance
Spec for tool name match instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
ToolNameMatchMetricValue
Tool name match metric value for an instance.
score
float
Output only. Tool name match score.
ToolNameMatchResults
Results for tool name match metric.
Output only. Tool name match metric values.
ToolNameMatchSpec
This type has no fields.
Spec for tool name match metric.
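To illustrate what a tool-name-match comparison does, the sketch below scores 1.0 when the function names called in the prediction equal those in the reference, and 0.0 otherwise. The JSON shape and the scoring rule are assumptions for illustration; the service computes the real metric from the prediction and reference strings.

```python
import json

def tool_name_match(prediction, reference):
    """Illustrative tool-name-match score: 1.0 when the called tool names
    in the prediction equal those in the reference, else 0.0. The
    "tool_calls" JSON shape is a hypothetical example format."""
    def names(s):
        return [call["name"] for call in json.loads(s).get("tool_calls", [])]
    return 1.0 if names(prediction) == names(reference) else 0.0

pred = '{"tool_calls": [{"name": "get_weather"}]}'
ref  = '{"tool_calls": [{"name": "get_weather"}]}'
score = tool_name_match(pred, ref)
```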
ToolParameterKVMatchInput
Input for tool parameter key value match metric.
Required. Spec for tool parameter key value match metric.
Required. Repeated tool parameter key value match instances.
ToolParameterKVMatchInstance
Spec for tool parameter key value match instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
ToolParameterKVMatchMetricValue
Tool parameter key value match metric value for an instance.
score
float
Output only. Tool parameter key value match score.
ToolParameterKVMatchResults
Results for tool parameter key value match metric.
Output only. Tool parameter key value match metric values.
ToolParameterKVMatchSpec
Spec for tool parameter key value match metric.
use_strict_string_match
bool
Optional. Whether to use STRICT string match on parameter values.
ToolParameterKeyMatchInput
Input for tool parameter key match metric.
Required. Spec for tool parameter key match metric.
Required. Repeated tool parameter key match instances.
ToolParameterKeyMatchInstance
Spec for tool parameter key match instance.
prediction
string
Required. Output of the evaluated model.
reference
string
Required. Ground truth used to compare against the prediction.
ToolParameterKeyMatchMetricValue
Tool parameter key match metric value for an instance.
score
float
Output only. Tool parameter key match score.
ToolParameterKeyMatchResults
Results for tool parameter key match metric.
Output only. Tool parameter key match metric values.
ToolParameterKeyMatchSpec
This type has no fields.
Spec for tool parameter key match metric.
TrainingPipeline
The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.
name
string
Output only. Resource name of the TrainingPipeline.
display_name
string
Required. The user-defined name of this TrainingPipeline.
Specifies Vertex AI owned input data that may be used for training the Model. The TrainingPipeline's training_task_definition should make clear whether this config is used and if there are any special requirements on how it should be filled. If nothing about this config is mentioned in the training_task_definition, then it should be assumed that the TrainingPipeline does not depend on this configuration.
training_task_definition
string
Required. A Google Cloud Storage path to the YAML file that defines the training task which is responsible for producing the model artifact, and may also include additional auxiliary work. The definition files that can be used here are found in gs://google-cloud-aiplatform/schema/trainingjob/definition/. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user has read access only.
Required. The training task's parameter(s), as specified in the training_task_definition's inputs.
Output only. The metadata information as specified in the training_task_definition's metadata. This metadata is an auxiliary runtime and final information about the training task. While the pipeline is running, this information is populated only on a best-effort basis. Only present if the pipeline's training_task_definition contains a metadata object.
Describes the Model that may be uploaded (via ModelService.UploadModel) by this TrainingPipeline. The TrainingPipeline's training_task_definition should make clear whether this Model description should be populated, and if there are any special requirements regarding how it should be filled. If nothing is mentioned in the training_task_definition, then it should be assumed that this field should not be filled and the training task either uploads the Model without a need of this information, or that training task does not support uploading a Model as part of the pipeline. When the Pipeline's state becomes PIPELINE_STATE_SUCCEEDED and the trained Model has been uploaded into Vertex AI, then the model_to_upload's resource name is populated. The Model is always uploaded into the Project and Location in which this pipeline is.
model_id
string
Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name.
This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.
parent_model
string
Optional. When this field is specified, the model_to_upload will not be uploaded as a new model; instead, it will become a new version of this parent_model.
Output only. The detailed state of the pipeline.
Output only. Only populated when the pipeline's state is PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.
Output only. Time when the TrainingPipeline was created.
Output only. Time when the TrainingPipeline for the first time entered the PIPELINE_STATE_RUNNING state.
Output only. Time when the TrainingPipeline entered any of the following states: PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED, PIPELINE_STATE_CANCELLED.
Output only. Time when the TrainingPipeline was most recently updated.
labels
map<string, string>
The labels with user-defined metadata to organize TrainingPipelines.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Customer-managed encryption key spec for a TrainingPipeline. If set, this TrainingPipeline will be secured by this key.
Note: The Model trained by this TrainingPipeline is also secured by this key if model_to_upload is not set separately.
Trial
A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.
name
string
Output only. Resource name of the Trial assigned by the service.
id
string
Output only. The identifier of the Trial assigned by the service.
Output only. The detailed state of the Trial.
Output only. The parameters of the Trial.
Output only. The final measurement containing the objective value.
Output only. A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_duration). These are used for early stopping computations.
Output only. Time when the Trial was started.
Output only. Time when the Trial's status changed to SUCCEEDED or INFEASIBLE.
client_id
string
Output only. The identifier of the client that originally requested this Trial. Each client is identified by a unique client_id. When a client asks for a suggestion, Vertex AI Vizier will assign it a Trial. The client should evaluate the Trial, complete it, and report back to Vertex AI Vizier. If a suggestion is requested again by the same client_id before the Trial is completed, the same Trial will be returned. Multiple clients with different client_ids can ask for suggestions simultaneously; each of them will get its own Trial.
infeasible_reason
string
Output only. A human-readable string describing why the Trial is infeasible. This is set only if the Trial state is INFEASIBLE.
custom_job
string
Output only. The CustomJob name linked to the Trial. It's set for a HyperparameterTuningJob's Trial.
web_access_uris
map<string, string>
Output only. URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a HyperparameterTuningJob and the job's trial_job_spec.enable_web_access field is true.
The keys are names of each node used for the trial; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool.
The values are the URIs for each node's interactive shell.
Parameter
A message representing a parameter to be tuned.
parameter_id
string
Output only. The ID of the parameter. The parameter should be defined in StudySpec's Parameters.
Output only. The value of the parameter. number_value will be set if a parameter defined in StudySpec is in type 'INTEGER', 'DOUBLE' or 'DISCRETE'. string_value will be set if a parameter defined in StudySpec is in type 'CATEGORICAL'.
State
Describes a Trial state.
Enums | |
---|---|
STATE_UNSPECIFIED | The Trial state is unspecified. |
REQUESTED | Indicates that a specific Trial has been requested, but it has not yet been suggested by the service. |
ACTIVE | Indicates that the Trial has been suggested. |
STOPPING | Indicates that the Trial should stop according to the service. |
SUCCEEDED | Indicates that the Trial is completed successfully. |
INFEASIBLE | Indicates that the Trial should not be attempted again. The service will set a Trial to INFEASIBLE when it's done but missing the final_measurement. |
TrialContext
description
string
A human-readable field which can store a description of this context. This will become part of the resulting Trial's description field.
If or when a Trial is generated or selected from this Context, its Parameters will match any parameters specified here. (For example, if this context specifies parameter name:'a' int_value:3, then a resulting Trial will have int_value:3 for its parameter named 'a'.) Note that we first attempt to match existing REQUESTED Trials with contexts, and if there are no matches, we generate suggestions in the subspace defined by the parameters specified here. NOTE: a Context without any Parameters matches the entire feasible search space.
TunedModel
The Model Registry Model and Online Prediction Endpoint associated with this TuningJob
.
model
string
Output only. The resource name of the TunedModel. Format: projects/{project}/locations/{location}/models/{model}
.
endpoint
string
Output only. A resource name of an Endpoint. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
.
TunedModelRef
TunedModel Reference for legacy model migration.
Union field tuned_model_ref
. The Tuned Model Reference for the model. tuned_model_ref
can be only one of the following:
tuned_model
string
Support migration from model registry.
tuning_job
string
Support migration from tuning job list page, from gemini-1.0-pro-002 to 1.5 and above.
pipeline_job
string
Support migration from tuning job list page, from bison model to gemini model.
TuningDataStats
The tuning data statistic values for TuningJob
.
Union field tuning_data_stats
.
tuning_data_stats
can be only one of the following:
The SFT Tuning data stats.
TuningJob
Represents a TuningJob that runs with Google owned models.
name
string
Output only. Identifier. Resource name of a TuningJob. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}
tuned_model_display_name
string
Optional. The display name of the TunedModel
. The name can be up to 128 characters long and can consist of any UTF-8 characters.
description
string
Optional. The description of the TuningJob
.
Output only. The detailed state of the job.
Output only. Only populated when job's state is JOB_STATE_FAILED
or JOB_STATE_CANCELLED
.
labels
map<string, string>
Optional. The labels with user-defined metadata to organize TuningJob
and generated resources such as Model
and Endpoint
.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
experiment
string
Output only. The Experiment associated with this TuningJob
.
Output only. The tuned model resources associated with this TuningJob
.
Output only. The tuning data statistics associated with this TuningJob
.
Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key.
Union field source_model
.
source_model
can be only one of the following:
base_model
string
The base model that is being tuned, e.g., "gemini-1.0-pro-002".
Union field tuning_spec
.
tuning_spec
can be only one of the following:
Tuning Spec for Supervised Fine Tuning.
Type
Type contains the list of OpenAPI data types as defined by https://swagger.io/docs/specification/data-models/data-types/
| Enums | |
|---|---|
| TYPE_UNSPECIFIED | Not specified, should not be used. |
| STRING | OpenAPI string type |
| NUMBER | OpenAPI number type |
| INTEGER | OpenAPI integer type |
| BOOLEAN | OpenAPI boolean type |
| ARRAY | OpenAPI array type |
| OBJECT | OpenAPI object type |
UndeployIndexOperationMetadata
Runtime operation information for IndexEndpointService.UndeployIndex
.
The operation generic information.
UndeployIndexRequest
Request message for IndexEndpointService.UndeployIndex
.
index_endpoint
string
Required. The name of the IndexEndpoint resource from which to undeploy an Index. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}
deployed_index_id
string
Required. The ID of the DeployedIndex to be undeployed from the IndexEndpoint.
UndeployIndexResponse
This type has no fields.
Response message for IndexEndpointService.UndeployIndex
.
UndeployModelOperationMetadata
Runtime operation information for EndpointService.UndeployModel
.
The operation generic information.
UndeployModelRequest
Request message for EndpointService.UndeployModel
.
endpoint
string
Required. The name of the Endpoint resource from which to undeploy a Model. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
deployed_model_id
string
Required. The ID of the DeployedModel to be undeployed from the Endpoint.
traffic_split
map<string, int32>
If this field is provided, then the Endpoint's traffic_split
will be overwritten with it. If the last DeployedModel is being undeployed from the Endpoint, the Endpoint.traffic_split will always end up empty when this call returns. A DeployedModel will be successfully undeployed only if it doesn't have any traffic assigned to it when this method executes, or if this field unassigns any traffic to it.
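The traffic_split constraint above can be sketched as a small request builder. This is a hedged illustration using a plain dict that mirrors the UndeployModelRequest fields documented here; the function name and the "dm-a"/"dm-b" IDs are invented for the example.

```python
# Sketch: build an UndeployModelRequest-shaped dict, enforcing that
# the undeployed model receives no traffic in the new split.
def build_undeploy_request(endpoint, deployed_model_id, remaining_split):
    total = sum(remaining_split.values())
    if remaining_split and total != 100:
        raise ValueError("traffic_split values must sum to 100")
    if deployed_model_id in remaining_split:
        raise ValueError("undeployed model must receive no traffic")
    return {
        "endpoint": endpoint,
        "deployed_model_id": deployed_model_id,
        "traffic_split": remaining_split,
    }

req = build_undeploy_request(
    "projects/p/locations/us-central1/endpoints/e1",
    "dm-b",
    {"dm-a": 100},  # all traffic moves to the remaining DeployedModel
)
```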
UndeployModelResponse
This type has no fields.
Response message for EndpointService.UndeployModel
.
UnmanagedContainerModel
Contains model information necessary to perform batch prediction without requiring a full model import.
artifact_uri
string
The path to the directory containing the Model artifact and any of its supporting files.
Contains the schemata used in Model's predictions and explanations
Input only. The specification of the container that is to be used when deploying this Model.
UpdateArtifactRequest
Request message for MetadataService.UpdateArtifact
.
Required. The Artifact containing updates. The Artifact's Artifact.name
field is used to identify the Artifact to be updated. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}
Optional. A FieldMask indicating which fields should be updated.
UpdateContextRequest
Request message for MetadataService.UpdateContext
.
Required. The Context containing updates. The Context's Context.name
field is used to identify the Context to be updated. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
Optional. A FieldMask indicating which fields should be updated.
UpdateDatasetRequest
Request message for DatasetService.UpdateDataset
.
Required. The Dataset which replaces the resource on the server.
Required. The update mask applies to the resource. For the FieldMask
definition, see google.protobuf.FieldMask
. Updatable fields:
display_name
description
labels
UpdateDatasetVersionRequest
Request message for DatasetService.UpdateDatasetVersion
.
Required. The DatasetVersion which replaces the resource on the server.
Required. The update mask applies to the resource. For the FieldMask
definition, see google.protobuf.FieldMask
. Updatable fields:
display_name
UpdateDeploymentResourcePoolOperationMetadata
Runtime operation information for UpdateDeploymentResourcePool method.
The operation generic information.
UpdateDeploymentResourcePoolRequest
Request message for UpdateDeploymentResourcePool method.
Required. The DeploymentResourcePool to update.
The DeploymentResourcePool's name
field is used to identify the DeploymentResourcePool to update. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
Required. The list of fields to update.
UpdateEndpointRequest
Request message for EndpointService.UpdateEndpoint
.
Required. The Endpoint which replaces the resource on the server.
Required. The update mask applies to the resource. See google.protobuf.FieldMask
.
UpdateEntityTypeRequest
Request message for FeaturestoreService.UpdateEntityType
.
Required. The EntityType's name
field is used to identify the EntityType to be updated. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}
Field mask is used to specify the fields to be overwritten in the EntityType resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields.
Updatable fields:
description
labels
monitoring_config.snapshot_analysis.disabled
monitoring_config.snapshot_analysis.monitoring_interval_days
monitoring_config.snapshot_analysis.staleness_days
monitoring_config.import_features_analysis.state
monitoring_config.import_features_analysis.anomaly_detection_baseline
monitoring_config.numerical_threshold_config.value
monitoring_config.categorical_threshold_config.value
offline_storage_ttl_days
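The updatable-field list above can be enforced client-side before sending the request. The sketch below is an illustration, not part of the API: the validator name is invented, and it only checks mask paths against the fields documented for UpdateEntityType.

```python
# Sketch: validate update_mask paths against the updatable fields
# documented for UpdateEntityType. "*" overrides all fields.
UPDATABLE = {
    "description",
    "labels",
    "monitoring_config.snapshot_analysis.disabled",
    "monitoring_config.snapshot_analysis.monitoring_interval_days",
    "monitoring_config.snapshot_analysis.staleness_days",
    "monitoring_config.import_features_analysis.state",
    "monitoring_config.import_features_analysis.anomaly_detection_baseline",
    "monitoring_config.numerical_threshold_config.value",
    "monitoring_config.categorical_threshold_config.value",
    "offline_storage_ttl_days",
}

def check_mask(paths):
    if paths == ["*"]:
        return True                         # override all fields
    bad = [p for p in paths if p not in UPDATABLE]
    if bad:
        raise ValueError(f"non-updatable paths: {bad}")
    return True
```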
UpdateExecutionRequest
Request message for MetadataService.UpdateExecution
.
Required. The Execution containing updates. The Execution's Execution.name
field is used to identify the Execution to be updated. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
Optional. A FieldMask indicating which fields should be updated.
UpdateExplanationDatasetOperationMetadata
Runtime operation information for ModelService.UpdateExplanationDataset
.
The common part of the operation metadata.
UpdateExplanationDatasetRequest
Request message for ModelService.UpdateExplanationDataset
.
model
string
Required. The resource name of the Model to update. Format: projects/{project}/locations/{location}/models/{model}
The example config containing the location of the dataset.
UpdateExplanationDatasetResponse
This type has no fields.
Response message of ModelService.UpdateExplanationDataset
operation.
UpdateFeatureGroupOperationMetadata
Details of operations that perform update FeatureGroup.
Operation metadata for FeatureGroup.
UpdateFeatureGroupRequest
Request message for FeatureRegistryService.UpdateFeatureGroup
.
Required. The FeatureGroup's name
field is used to identify the FeatureGroup to be updated. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}
Field mask is used to specify the fields to be overwritten in the FeatureGroup resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields.
Updatable fields:
labels
description
big_query
big_query.entity_id_columns
UpdateFeatureOnlineStoreOperationMetadata
Details of operations that perform update FeatureOnlineStore.
Operation metadata for FeatureOnlineStore.
UpdateFeatureOnlineStoreRequest
Request message for FeatureOnlineStoreAdminService.UpdateFeatureOnlineStore
.
Required. The FeatureOnlineStore's name
field is used to identify the FeatureOnlineStore to be updated. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}
Field mask is used to specify the fields to be overwritten in the FeatureOnlineStore resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields.
Updatable fields:
labels
description
bigtable
bigtable.auto_scaling
bigtable.enable_multi_region_replica
UpdateFeatureOperationMetadata
Details of operations that perform update Feature.
Operation metadata for Feature Update.
UpdateFeatureRequest
Request message for FeaturestoreService.UpdateFeature
. Request message for FeatureRegistryService.UpdateFeature
.
Required. The Feature's name
field is used to identify the Feature to be updated. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature}
projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature}
Field mask is used to specify the fields to be overwritten in the Features resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields.
Updatable fields:
description
labels
disable_monitoring
(Not supported for FeatureRegistryService Feature)
point_of_contact
(Not supported for FeaturestoreService FeatureStore)
UpdateFeatureViewOperationMetadata
Details of operations that perform update FeatureView.
Operation metadata for FeatureView Update.
UpdateFeatureViewRequest
Request message for FeatureOnlineStoreAdminService.UpdateFeatureView
.
Required. The FeatureView's name
field is used to identify the FeatureView to be updated. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}
Field mask is used to specify the fields to be overwritten in the FeatureView resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields.
Updatable fields:
labels
service_agent_type
big_query_source
big_query_source.uri
big_query_source.entity_id_columns
feature_registry_source
feature_registry_source.feature_groups
sync_config
sync_config.cron
UpdateFeaturestoreOperationMetadata
Details of operations that perform update Featurestore.
Operation metadata for Featurestore.
UpdateFeaturestoreRequest
Request message for FeaturestoreService.UpdateFeaturestore
.
Required. The Featurestore's name
field is used to identify the Featurestore to be updated. Format: projects/{project}/locations/{location}/featurestores/{featurestore}
Field mask is used to specify the fields to be overwritten in the Featurestore resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields.
Updatable fields:
labels
online_serving_config.fixed_node_count
online_serving_config.scaling
online_storage_ttl_days
UpdateIndexEndpointRequest
Request message for IndexEndpointService.UpdateIndexEndpoint
.
Required. The IndexEndpoint which replaces the resource on the server.
Required. The update mask applies to the resource. See google.protobuf.FieldMask
.
UpdateIndexOperationMetadata
Runtime operation information for IndexService.UpdateIndex
.
The operation generic information.
The operation metadata with regard to Matching Engine Index operation.
UpdateIndexRequest
Request message for IndexService.UpdateIndex
.
Required. The Index which updates the resource on the server.
The update mask applies to the resource. For the FieldMask
definition, see google.protobuf.FieldMask
.
UpdateModelDeploymentMonitoringJobOperationMetadata
Runtime operation information for JobService.UpdateModelDeploymentMonitoringJob
.
The operation generic information.
UpdateModelDeploymentMonitoringJobRequest
Request message for JobService.UpdateModelDeploymentMonitoringJob
.
Required. The model monitoring configuration which replaces the resource on the server.
Required. The update mask is used to specify the fields to be overwritten in the ModelDeploymentMonitoringJob resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to *
to override all fields. For the objective config, the user can either provide the update mask for model_deployment_monitoring_objective_configs or any combination of its nested fields, such as: model_deployment_monitoring_objective_configs.objective_config.training_dataset.
Updatable fields:
display_name
model_deployment_monitoring_schedule_config
model_monitoring_alert_config
logging_sampling_strategy
labels
log_ttl
enable_monitoring_pipeline_logs
and model_deployment_monitoring_objective_configs
or model_deployment_monitoring_objective_configs.objective_config.training_dataset
model_deployment_monitoring_objective_configs.objective_config.training_prediction_skew_detection_config
model_deployment_monitoring_objective_configs.objective_config.prediction_drift_detection_config
UpdateModelRequest
Request message for ModelService.UpdateModel
.
Required. The Model which replaces the resource on the server. When Model Versioning is enabled, model.name is used to determine whether to update the model or a model version:
1. model.name with the @ value, e.g. models/123@1, refers to a version-specific update.
2. model.name without the @ value, e.g. models/123, refers to a model update.
3. model.name with @-, e.g. models/123@-, refers to a model update.
4. Supported model fields: display_name, description; supported version-specific fields: version_description. Labels are supported in both scenarios. Both the model labels and the version labels are merged when a model is returned. When updating labels, if the request is for a model-specific update, the model labels get updated; otherwise, the version labels get updated.
5. A mismatch between the model name and model version name fields causes a precondition error.
6. One request cannot update both the model and the version fields. You must update them separately.
Required. The update mask applies to the resource. For the FieldMask
definition, see google.protobuf.FieldMask
.
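The model.name rules above amount to a small classification of the name's suffix. The helper below is an illustrative sketch of those rules; the function name and return labels are invented, not part of the API.

```python
# Sketch: classify a model.name per the UpdateModelRequest rules.
def classify_update_target(name: str) -> str:
    if "@" not in name:
        return "model"            # e.g. models/123 -> model update
    _, _, version = name.rpartition("@")
    if version == "-":
        return "model"            # e.g. models/123@- -> model update
    return "version"              # e.g. models/123@1 -> version-specific update
```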
UpdateNotebookRuntimeTemplateRequest
Request message for NotebookService.UpdateNotebookRuntimeTemplate
.
Required. The NotebookRuntimeTemplate to update.
Required. The update mask applies to the resource. For the FieldMask
definition, see google.protobuf.FieldMask
. Input format: {paths: "${updated_field}"}
Updatable fields:
encryption_spec.kms_key_name
UpdatePersistentResourceOperationMetadata
Details of operations that perform update PersistentResource.
Operation metadata for PersistentResource.
progress_message
string
Progress message for the update LRO.
UpdatePersistentResourceRequest
Request message for UpdatePersistentResource method.
Required. The PersistentResource to update.
The PersistentResource's name
field is used to identify the PersistentResource to update. Format: projects/{project}/locations/{location}/persistentResources/{persistent_resource}
Required. Specify the fields to be overwritten in the PersistentResource by the update method.
UpdateScheduleRequest
Request message for ScheduleService.UpdateSchedule
.
Required. The Schedule which replaces the resource on the server. The following restrictions will be applied:
- The scheduled request type cannot be changed.
- The non-empty fields cannot be unset.
- The output_only fields will be ignored if specified.
Required. The update mask applies to the resource. See google.protobuf.FieldMask
.
UpdateSpecialistPoolOperationMetadata
Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool
.
specialist_pool
string
Output only. The name of the SpecialistPool to which the specialists are being added. Format: projects/{project_id}/locations/{location_id}/specialistPools/{specialist_pool}
The operation generic information.
UpdateSpecialistPoolRequest
Request message for SpecialistPoolService.UpdateSpecialistPool
.
Required. The SpecialistPool which replaces the resource on the server.
Required. The update mask applies to the resource.
UpdateTensorboardExperimentRequest
Request message for TensorboardService.UpdateTensorboardExperiment
.
Required. Field mask is used to specify the fields to be overwritten in the TensorboardExperiment resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.
Required. The TensorboardExperiment's name
field is used to identify the TensorboardExperiment to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
UpdateTensorboardOperationMetadata
Details of operations that perform update Tensorboard.
Operation metadata for Tensorboard.
UpdateTensorboardRequest
Request message for TensorboardService.UpdateTensorboard
.
Required. Field mask is used to specify the fields to be overwritten in the Tensorboard resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.
Required. The Tensorboard's name
field is used to identify the Tensorboard to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
UpdateTensorboardRunRequest
Request message for TensorboardService.UpdateTensorboardRun
.
Required. Field mask is used to specify the fields to be overwritten in the TensorboardRun resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.
Required. The TensorboardRun's name
field is used to identify the TensorboardRun to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
UpdateTensorboardTimeSeriesRequest
Request message for TensorboardService.UpdateTensorboardTimeSeries
.
Required. Field mask is used to specify the fields to be overwritten in the TensorboardTimeSeries resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.
Required. The TensorboardTimeSeries' name
field is used to identify the TensorboardTimeSeries to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}
UpgradeNotebookRuntimeOperationMetadata
Metadata information for NotebookService.UpgradeNotebookRuntime
.
The operation generic information.
progress_message
string
A human-readable message that shows the intermediate progress details of NotebookRuntime.
UpgradeNotebookRuntimeRequest
Request message for NotebookService.UpgradeNotebookRuntime
.
name
string
Required. The name of the NotebookRuntime resource to be upgraded. Instead of checking whether the name is in a valid NotebookRuntime resource name format, the service directly returns a NotFound error if there is no such NotebookRuntime.
UpgradeNotebookRuntimeResponse
This type has no fields.
Response message for NotebookService.UpgradeNotebookRuntime
.
UploadModelOperationMetadata
Details of ModelService.UploadModel
operation.
The common part of the operation metadata.
UploadModelRequest
Request message for ModelService.UploadModel
.
parent
string
Required. The resource name of the Location into which to upload the Model. Format: projects/{project}/locations/{location}
parent_model
string
Optional. The resource name of the model into which to upload the version. Only specify this field when uploading a new version.
model_id
string
Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name.
This value may be up to 63 characters, and valid characters are [a-z0-9_-]
. The first character cannot be a number or hyphen.
Required. The Model to create.
service_account
string
Optional. The user-provided custom service account to use to do the model upload. If empty, Vertex AI Service Agent will be used to access resources needed to upload the model. This account must belong to the target project where the model is uploaded to, i.e., the project specified in the parent
field of this request and have necessary read permissions (to Google Cloud Storage, Artifact Registry, etc.).
UploadModelResponse
Response message of ModelService.UploadModel
operation.
model
string
The name of the uploaded Model resource. Format: projects/{project}/locations/{location}/models/{model}
model_version_id
string
Output only. The version ID of the model that is uploaded.
UpsertDatapointsRequest
Request message for IndexService.UpsertDatapoints
index
string
Required. The name of the Index resource to be updated. Format: projects/{project}/locations/{location}/indexes/{index}
A list of datapoints to be created/updated.
Optional. Update mask is used to specify the fields to be overwritten in the datapoints by the update. The fields specified in the update_mask are relative to each IndexDatapoint inside datapoints, not the full request.
Updatable fields:
- Use all_restricts to update both restricts and numeric_restricts.
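A request body mirroring the UpsertDatapointsRequest fields above might look like the following. This is a hedged sketch: the datapoint shape is a simplified stand-in for IndexDatapoint, and the index name and IDs are placeholders.

```python
# Sketch of an UpsertDatapointsRequest-shaped body. With update_mask
# set to all_restricts, only restricts/numeric_restricts are
# overwritten on existing datapoints.
request = {
    "index": "projects/p/locations/us-central1/indexes/idx1",
    "datapoints": [
        {"datapoint_id": "dp-1", "feature_vector": [0.1, 0.2, 0.3]},
    ],
    "update_mask": {"paths": ["all_restricts"]},
}
```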
UpsertDatapointsResponse
This type has no fields.
Response message for IndexService.UpsertDatapoints
UserActionReference
References an API call. It contains more information about long running operation and Jobs that are triggered by the API call.
method
string
The method name of the API RPC call. For example, "/google.cloud.aiplatform.{apiVersion}.DatasetService.CreateDataset"
Union field reference
.
reference
can be only one of the following:
operation
string
For API calls that return a long running operation. Resource name of the long running operation. Format: projects/{project}/locations/{location}/operations/{operation}
data_labeling_job
string
For API calls that start a LabelingJob. Resource name of the LabelingJob. Format: projects/{project}/locations/{location}/dataLabelingJobs/{data_labeling_job}
Value
Value is the value of the field.
Union field value
.
value
can be only one of the following:
int_value
int64
An integer value.
double_value
double
A double value.
string_value
string
A string value.
VertexAISearch
Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/products/agent-builder
datastore
string
Required. Fully-qualified Vertex AI Search data store resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}
VideoMetadata
WorkerPoolSpec
Represents the spec of a worker pool in a job.
Optional. Immutable. The specification of a single machine.
replica_count
int64
Optional. The number of worker replicas to use for this worker pool.
Optional. List of NFS mount spec.
Disk spec.
Union field task
. The custom task to be executed in this worker pool. task
can be only one of the following:
The custom container task.
The Python packaged task.
WriteFeatureValuesPayload
Contains Feature values to be written for a specific entity.
entity_id
string
Required. The ID of the entity.
Required. Feature values to be written, mapping from Feature ID to value. Up to 100,000 feature_values
entries may be written across all payloads. The feature generation time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.
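The generation-time bounds above (no older than 1825 days, no more than 366 days in the future) can be checked before writing. The sketch below is illustrative: the function name and payload shape are invented, and the bounds come from the field description above.

```python
from datetime import datetime, timedelta, timezone

# Sketch: build a WriteFeatureValuesPayload-shaped dict, rejecting
# generation times outside the documented window.
def make_payload(entity_id, feature_values, generation_time=None):
    now = datetime.now(timezone.utc)
    ts = generation_time or now
    if ts < now - timedelta(days=1825):
        raise ValueError("generation time older than five years")
    if ts > now + timedelta(days=366):
        raise ValueError("generation time over one year in the future")
    return {"entity_id": entity_id, "feature_values": feature_values}

payload = make_payload("user-42", {"age": {"int64_value": 37}})
```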
WriteFeatureValuesRequest
Request message for FeaturestoreOnlineServingService.WriteFeatureValues
.
entity_type
string
Required. The resource name of the EntityType for the entities being written. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}. For example, for a machine learning model predicting user clicks on a website, an EntityType ID could be user.
Required. The entities to be written. Up to 100,000 feature values can be written across all payloads
.
WriteFeatureValuesResponse
This type has no fields.
Response message for FeaturestoreOnlineServingService.WriteFeatureValues
.
WriteTensorboardExperimentDataRequest
Request message for TensorboardService.WriteTensorboardExperimentData
.
tensorboard_experiment
string
Required. The resource name of the TensorboardExperiment to write data to. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}
Required. Requests containing per-run TensorboardTimeSeries data to write.
WriteTensorboardExperimentDataResponse
This type has no fields.
Response message for TensorboardService.WriteTensorboardExperimentData
.
WriteTensorboardRunDataRequest
Request message for TensorboardService.WriteTensorboardRunData
.
tensorboard_run
string
Required. The resource name of the TensorboardRun to write data to. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}
Required. The TensorboardTimeSeries data to write. Values within a time series are indexed by their step value. Repeated writes to the same step will overwrite the existing value for that step. The upper limit of data points per write request is 5000.
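The step-indexed overwrite semantics and the 5000-point limit described above can be modeled locally. This is an illustrative sketch only; the function name and data shapes are invented.

```python
# Sketch: values are indexed by step, repeated writes to a step
# overwrite, and one request may carry at most 5000 points.
MAX_POINTS_PER_WRITE = 5000

def merge_writes(series, new_points):
    """series: {step: value}; new_points: [(step, value), ...]."""
    if len(new_points) > MAX_POINTS_PER_WRITE:
        raise ValueError("over 5000 data points in one write")
    for step, value in new_points:
        series[step] = value            # overwrite any existing step
    return series

series = merge_writes({}, [(1, 0.9), (2, 0.8)])
series = merge_writes(series, [(2, 0.7)])   # step 2 overwritten
```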
WriteTensorboardRunDataResponse
This type has no fields.
Response message for TensorboardService.WriteTensorboardRunData
.
XraiAttribution
An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825
Supported only by image Models.
step_count
int32
Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range.
The valid range of its value is [1, 100], inclusive.
Config for SmoothGrad approximation of gradients.
When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
Config for XRAI with blur baseline.
When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383