API documentation for the aiplatform package.
Classes
Artifact
Metadata Artifact resource for Vertex AI.
AutoMLForecastingTrainingJob
Constructs a Forecasting Training Job.
AutoMLImageTrainingJob
Constructs an AutoML Image Training Job.
AutoMLTabularTrainingJob
Constructs an AutoML Tabular Training Job.
Example usage:
job = training_jobs.AutoMLTabularTrainingJob(
    display_name="my_display_name",
    optimization_prediction_type="classification",
    optimization_objective="minimize-log-loss",
    column_specs={"column_1": "auto", "column_2": "numeric"},
    labels={'key': 'value'},
)
AutoMLTextTrainingJob
Constructs an AutoML Text Training Job.
AutoMLVideoTrainingJob
Constructs an AutoML Video Training Job.
BatchPredictionJob
Retrieves a BatchPredictionJob resource and instantiates its representation.
CustomContainerTrainingJob
Class to launch a Custom Training Job in Vertex AI using a Container.
CustomJob
Vertex AI Custom Job.
CustomPythonPackageTrainingJob
Class to launch a Custom Training Job in Vertex AI using a Python Package.
Takes a training implementation as a Python package and executes that package in Cloud Vertex AI Training.
CustomTrainingJob
Class to launch a Custom Training Job in Vertex AI using a script.
Takes a training implementation as a Python script and executes that script in Cloud Vertex AI Training.
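Example usage (a minimal sketch; the script path, container images, and machine type are hypothetical placeholders, and aiplatform.init is assumed to have been called with a project and staging bucket):
from google.cloud import aiplatform

job = aiplatform.CustomTrainingJob(
    display_name="my-custom-training-job",
    script_path="task.py",  # local training script, uploaded and run by the service
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
    requirements=["pandas"],
    model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)
model = job.run(
    replica_count=1,
    machine_type="n1-standard-4",
    args=["--epochs", "10"],
)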
Endpoint
Retrieves an endpoint resource.
EntityType
Managed entityType resource for Vertex AI.
Execution
Metadata Execution resource for Vertex AI.
Experiment
Represents a Vertex AI Experiment resource.
ExperimentRun
A Vertex AI Experiment run.
Feature
Managed feature resource for Vertex AI.
Featurestore
Managed featurestore resource for Vertex AI.
HyperparameterTuningJob
Vertex AI Hyperparameter Tuning Job.
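Example usage (a minimal sketch; the worker pool spec, container image, and metric/parameter names are hypothetical, and aiplatform.init is assumed to have been called with a project and staging bucket):
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# CustomJob executed once per trial; the container image is a placeholder.
custom_job = aiplatform.CustomJob(
    display_name="trial-job",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/my-trainer:latest"},
    }],
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="my-hp-tuning-job",
    custom_job=custom_job,
    metric_spec={"loss": "minimize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
    },
    max_trial_count=8,
    parallel_trial_count=2,
)
hp_job.run()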
ImageDataset
Managed image dataset resource for Vertex AI.
MatchingEngineIndex
Matching Engine index resource for Vertex AI.
MatchingEngineIndexEndpoint
Matching Engine index endpoint resource for Vertex AI.
Model
Retrieves the model resource and instantiates its representation.
ModelDeploymentMonitoringJob
Vertex AI Model Deployment Monitoring Job.
This class should be used in conjunction with the Endpoint class in order to configure model monitoring for deployed models.
ModelEvaluation
Retrieves the ModelEvaluation resource and instantiates its representation.
PipelineJob
Retrieves a PipelineJob resource and instantiates its representation.
PrivateEndpoint
Represents a Vertex AI PrivateEndpoint resource.
SequenceToSequencePlusForecastingTrainingJob
Constructs a Forecasting Training Job.
TabularDataset
Managed tabular dataset resource for Vertex AI.
Tensorboard
Managed tensorboard resource for Vertex AI.
TensorboardExperiment
Managed tensorboard experiment resource for Vertex AI.
TensorboardRun
Managed tensorboard run resource for Vertex AI.
TensorboardTimeSeries
Managed tensorboard time series resource for Vertex AI.
TextDataset
Managed text dataset resource for Vertex AI.
TimeSeriesDataset
Managed time series dataset resource for Vertex AI.
VideoDataset
Managed video dataset resource for Vertex AI.
Functions
end_run
end_run(state: google.cloud.aiplatform_v1.types.execution.Execution.State = Execution.State.COMPLETE)
Ends the current experiment run.
aiplatform.start_run('my-run')
...
aiplatform.end_run()
get_experiment_df
get_experiment_df(experiment: Optional[str] = None)
Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.
Example:
aiplatform.init(experiment='exp-1')
aiplatform.start_run(run='run-1')
aiplatform.log_params({'learning_rate': 0.1})
aiplatform.log_metrics({'accuracy': 0.9})

aiplatform.start_run(run='run-2')
aiplatform.log_params({'learning_rate': 0.2})
aiplatform.log_metrics({'accuracy': 0.95})
aiplatform.get_experiment_df()
Will result in the following DataFrame:
| experiment_name | run_name | param.learning_rate | metric.accuracy |
| exp-1 | run-1 | 0.1 | 0.9 |
| exp-1 | run-2 | 0.2 | 0.95 |
get_pipeline_df
get_pipeline_df(pipeline: str)
Returns a Pandas DataFrame of the parameters and metrics associated with one pipeline.
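Example (a minimal sketch; 'my-pipeline' and the project/location are hypothetical placeholders):
from google.cloud import aiplatform

aiplatform.init(project='my-project', location='us-central1')
# DataFrame of the parameters and metrics associated with the pipeline.
df = aiplatform.get_pipeline_df(pipeline='my-pipeline')
print(df.head())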
init
init(
*,
project: Optional[str] = None,
location: Optional[str] = None,
experiment: Optional[str] = None,
experiment_description: Optional[str] = None,
experiment_tensorboard: Optional[
Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]
] = None,
staging_bucket: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None,
encryption_spec_key_name: Optional[str] = None
)
Updates common initialization parameters with provided options.
Name | Description |
project | The default project to use when making API calls. |
location | The default location to use when making API calls. If not set, defaults to us-central1. |
experiment | Optional. The experiment name. |
experiment_description | Optional. The description of the experiment. |
experiment_tensorboard | Optional. The Vertex AI TensorBoard instance, Tensorboard resource name, or Tensorboard resource ID to use as a backing Tensorboard for the provided experiment. Example tensorboard resource name format: "projects/123/locations/us-central1/tensorboards/456" |
staging_bucket | The default staging bucket to use to stage artifacts when making API calls. In the form gs://... |
credentials | The default custom credentials to use when making API calls. If not provided, credentials will be ascertained from the environment. |
encryption_spec_key_name | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. |
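Example (a minimal sketch; the project, bucket, and experiment names are hypothetical placeholders):
from google.cloud import aiplatform

aiplatform.init(
    project='my-project',
    location='us-central1',
    staging_bucket='gs://my-staging-bucket',
    experiment='my-experiment',
    experiment_description='description of my experiment',
)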
log
log(
*, pipeline_job: Optional[google.cloud.aiplatform.pipeline_jobs.PipelineJob] = None
)
Log Vertex AI Resources to the current experiment run.
aiplatform.start_run('my-run')
my_job = aiplatform.PipelineJob(...)
my_job.submit()
aiplatform.log(pipeline_job=my_job)
Name | Description |
pipeline_job | Optional. Vertex PipelineJob to associate to this Experiment Run. |
log_metrics
log_metrics(metrics: Dict[str, Union[float, int, str]])
Log single or multiple metrics with specified key and value pairs.
Metrics with the same key will be overwritten.
aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})
Name | Description |
metrics | Required. Metrics key/value pairs. |
log_params
log_params(params: Dict[str, Union[float, int, str]])
Log single or multiple parameters with specified key and value pairs.
Parameters with the same key will be overwritten.
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
Name | Description |
params | Required. Parameter key/value pairs. |
log_time_series_metrics
log_time_series_metrics(
metrics: Dict[str, float],
step: Optional[int] = None,
wall_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None,
)
Logs time series metrics to this Experiment Run.
Requires that the experiment or experiment run has a backing Vertex AI TensorBoard resource.
my_tensorboard = aiplatform.Tensorboard(...)
aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
aiplatform.start_run('my-run')
# increments steps as logged
for i in range(10):
aiplatform.log_time_series_metrics({'loss': loss})
# explicitly log steps
for i in range(10):
aiplatform.log_time_series_metrics({'loss': loss}, step=i)
Name | Description |
metrics | Required. Dictionary where keys are metric names and values are metric values. |
step | Optional. Step index of this data point within the run. If not provided, the latest step amongst all time series metrics already logged will be used. |
wall_time | Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, this will be generated based on the value from time.time(). |
start_execution
start_execution(
*,
schema_title: Optional[str] = None,
display_name: Optional[str] = None,
resource_id: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
schema_version: Optional[str] = None,
description: Optional[str] = None,
resume: bool = False,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None
)
Creates and starts a new Metadata Execution or resumes a previously created Execution.
To start a new execution:
with aiplatform.start_execution(schema_title='system.ContainerExecution', display_name='trainer') as exc:
exc.assign_input_artifacts([my_artifact])
model = aiplatform.Artifact.create(uri='gs://my-uri', schema_title='system.Model')
exc.assign_output_artifacts([model])
To continue a previously created execution:
with aiplatform.start_execution(resource_id='my-exc', resume=True) as exc:
...
Name | Description |
schema_title | Optional. schema_title identifies the schema title used by the Execution. Required if starting a new Execution. |
resource_id | Optional. The <resource_id> portion of the Execution name, which is globally unique in a metadataStore, with the format: projects/123/locations/us-central1/metadataStores/<metadata_store_id>/executions/<resource_id>. |
display_name | Optional. The user-defined name of the Execution. |
schema_version | Optional. schema_version specifies the version used by the Execution. If not set, defaults to the latest version. |
metadata | Optional. Contains the metadata information that will be stored in the Execution. |
description | Optional. Describes the purpose of the Execution to be created. |
metadata_store_id | Optional. The <metadata_store_id> portion of the resource name with the format: projects/123/locations/us-central1/metadataStores/<metadata_store_id>/artifacts/<resource_id> If not provided, the MetadataStore's ID will be set to "default". |
project | Optional. Project used to create this Execution. Overrides project set in aiplatform.init. |
location | Optional. Location used to create this Execution. Overrides location set in aiplatform.init. |
credentials | Optional. Custom credentials used to create this Execution. Overrides credentials set in aiplatform.init. |
start_run
start_run(
run: str,
*,
tensorboard: Optional[
Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]
] = None,
resume=False
)
Start a run in the current session.
aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate':0.1})
Use as a context manager. The run will be ended on context exit:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run') as my_run:
my_run.log_params({'learning_rate':0.1})
Resume a previously started run:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run', resume=True) as my_run:
my_run.log_params({'learning_rate':0.1})
Name | Description |
run | Required. Name of the run to assign the current session to. |
resume | Whether to resume this run. If False, a new run will be created. |