API documentation for the preview package.
Classes
VertexModel
Mixin class that can be used to add Vertex AI remote execution to a custom model.
Modules
generative_models
Classes for working with the Gemini models.
language_models
Classes for working with language models.
vision_models
Classes for working with vision models.
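For example, the generative_models module exposes the Gemini client classes. A minimal sketch follows; the model name is illustrative, not a guaranteed default:

from vertexai.preview import generative_models

# Load a Gemini model from the generative_models module and send a simple
# prompt. The model name is illustrative; use one available in your project.
model = generative_models.GenerativeModel("gemini-1.0-pro")
response = model.generate_content("Summarize Vertex AI in one sentence.")
print(response.text)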
Functions
end_run
end_run(
    state: google.cloud.aiplatform_v1.types.execution.Execution.State = State.COMPLETE,
)
Ends the current experiment run.
aiplatform.start_run('my-run')
...
aiplatform.end_run()
from_pretrained
from_pretrained(
    *,
    model_name: typing.Optional[str] = None,
    custom_job_name: typing.Optional[str] = None,
    foundation_model_name: typing.Optional[str] = None
) -> typing.Union[sklearn.base.BaseEstimator, tf.Module, torch.nn.Module]
Pulls a model from Model Registry or from a CustomJob ID for retraining.
The returned model is wrapped with a Vertex wrapper for running remote jobs on Vertex, unless an unwrapped model was registered to Model Registry.
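For example, a model that was previously registered (see register below) can be pulled back for further training. A minimal sketch, in which the model name and training data are placeholders:

import vertexai

# Pull a registered sklearn model back from Model Registry. 'my-sklearn-model'
# is a placeholder for your own Model resource name.
model = vertexai.preview.from_pretrained(model_name='my-sklearn-model')

# The returned estimator is wrapped for Vertex remote execution, so it can be
# retrained locally or remotely with the usual fit() call.
model.fit(X_train, y_train)  # X_train / y_train are placeholders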
get_experiment_df
get_experiment_df(experiment: typing.Optional[str] = None) -> pd.DataFrame
Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.
Example:
aiplatform.init(experiment='exp-1')
aiplatform.start_run(run='run-1')
aiplatform.log_params({'learning_rate': 0.1})
aiplatform.log_metrics({'accuracy': 0.9})
aiplatform.start_run(run='run-2')
aiplatform.log_params({'learning_rate': 0.2})
aiplatform.log_metrics({'accuracy': 0.95})
aiplatform.get_experiment_df()
Will result in the following DataFrame:
| experiment_name | run_name | param.learning_rate | metric.accuracy |
|---|---|---|---|
| exp-1 | run-1 | 0.1 | 0.9 |
| exp-1 | run-2 | 0.2 | 0.95 |
| Name | Description |
|---|---|
| experiment | Optional. Name of the experiment to filter results. If not set, returns results of the current active experiment. |
init
init(
    *,
    remote: typing.Optional[bool] = None,
    autolog: typing.Optional[bool] = None,
    cluster: typing.Optional[
        vertexai.preview._workflow.shared.configs.PersistentResourceConfig
    ] = None
)
Updates preview global parameters for Vertex remote execution.
| Name | Description |
|---|---|
| remote | Optional. A global flag that indicates whether a method will be executed remotely. Defaults to False. The method-level remote flag has higher priority than this global flag. |
| autolog | Optional. Whether to turn on the autologging feature for remote execution. To learn more about autologging, see https://cloud.google.com/vertex-ai/docs/experiments/autolog-data. |
| cluster | Optional. If passed, checks whether the cluster exists; if not, creates a default one (single node, "n1-standard-4", no GPU) with the given name, then uses the cluster to run CustomJobs. Defaults to None. Example usage is shown below the table. |

Example usage for the cluster parameter:

from vertexai.preview.shared.configs import PersistentResourceConfig

cluster = PersistentResourceConfig(
    name="my-cluster-1",
    resource_pools=[
        ResourcePool(replica_count=1),
        ResourcePool(
            machine_type="n1-standard-8",
            replica_count=2,
            accelerator_type="NVIDIA_TESLA_P100",
            accelerator_count=1,
        ),
    ],
)
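Putting these parameters together, a typical call might enable remote execution and autologging and run jobs on the cluster configured above. A minimal sketch, assuming ResourcePool is importable from the same configs module as PersistentResourceConfig:

import vertexai

# Enable remote execution and autologging globally, and run CustomJobs on the
# persistent cluster defined above (created with defaults if it does not exist).
vertexai.preview.init(remote=True, autolog=True, cluster=cluster)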
log_classification_metrics
log_classification_metrics(
    *,
    labels: typing.Optional[typing.List[str]] = None,
    matrix: typing.Optional[typing.List[typing.List[int]]] = None,
    fpr: typing.Optional[typing.List[float]] = None,
    tpr: typing.Optional[typing.List[float]] = None,
    threshold: typing.Optional[typing.List[float]] = None,
    display_name: typing.Optional[str] = None
) -> (
    google.cloud.aiplatform.metadata.schema.google.artifact_schema.ClassificationMetrics
)
Creates an artifact for classification metrics and logs it to the ExperimentRun. Currently supports confusion matrix and ROC curve.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
classification_metrics = my_run.log_classification_metrics(
    display_name='my-classification-metrics',
    labels=['cat', 'dog'],
    matrix=[[9, 1], [1, 9]],
    fpr=[0.1, 0.5, 0.9],
    tpr=[0.1, 0.7, 0.9],
    threshold=[0.9, 0.5, 0.1],
)
| Name | Description |
|---|---|
| labels | Optional. List of label names for the confusion matrix. Must be set if 'matrix' is set. |
| matrix | Optional. Values for the confusion matrix. Must be set if 'labels' is set. |
| fpr | Optional. List of false positive rates for the ROC curve. Must be set if 'tpr' or 'threshold' is set. |
| tpr | Optional. List of true positive rates for the ROC curve. Must be set if 'fpr' or 'threshold' is set. |
| threshold | Optional. List of thresholds for the ROC curve. Must be set if 'fpr' or 'tpr' is set. |
| display_name | Optional. The user-defined name for the classification metric artifact. |
log_metrics
log_metrics(metrics: typing.Dict[str, typing.Union[float, int, str]])
Logs single or multiple metrics with the specified key/value pairs.
Metrics with the same key will be overwritten.
aiplatform.start_run('my-run', experiment='my-experiment')
aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})
| Name | Description |
|---|---|
| metrics | Required. Metrics key/value pairs. |
log_params
log_params(params: typing.Dict[str, typing.Union[float, int, str]])
Logs single or multiple parameters with the specified key/value pairs.
Parameters with the same key will be overwritten.
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
| Name | Description |
|---|---|
| params | Required. Parameter key/value pairs. |
log_time_series_metrics
log_time_series_metrics(
    metrics: typing.Dict[str, float],
    step: typing.Optional[int] = None,
    wall_time: typing.Optional[google.protobuf.timestamp_pb2.Timestamp] = None,
)
Logs time series metrics to this experiment run.
Requires that the experiment or experiment run has a backing Vertex TensorBoard resource.
my_tensorboard = aiplatform.Tensorboard(...)
aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
aiplatform.start_run('my-run')
# increments steps as logged
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss})

# explicitly log steps
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss}, step=i)
| Name | Description |
|---|---|
| metrics | Required. Dictionary where keys are metric names and values are metric values. |
| step | Optional. Step index of this data point within the run. If not provided, the latest step amongst all time series metrics already logged will be used. |
| wall_time | Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, it will be generated based on the value from time.time(). |
register
register(
    model: typing.Union[sklearn.base.BaseEstimator, tf.Module, torch.nn.Module],
    use_gpu: bool = False,
) -> google.cloud.aiplatform.models.Model
Registers a model and returns a Model representing the registered Model resource.
| Name | Description |
|---|---|
| model | Required. An OSS model. Supported frameworks: sklearn, tensorflow, pytorch. |
| use_gpu | Optional. Whether to use GPU for model serving. Defaults to False. |
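For example, a locally trained scikit-learn estimator can be registered to Model Registry and later pulled back with from_pretrained. A minimal sketch, in which the project, bucket, and training data are placeholders:

import vertexai
from sklearn.linear_model import LogisticRegression

# Placeholders: substitute your own project, region, and staging bucket.
vertexai.init(project='my-project', location='us-central1', staging_bucket='gs://my-bucket')

model = LogisticRegression()
model.fit(X_train, y_train)  # X_train / y_train are placeholders

# Register the trained estimator; the returned object is a
# google.cloud.aiplatform.models.Model resource.
registered_model = vertexai.preview.register(model, use_gpu=False)
print(registered_model.resource_name)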
remote
remote(cls_or_method: typing.Any) -> typing.Any
Takes a class or method and adds Vertex remote execution support.
Example:
LogisticRegression = vertexai.preview.remote(LogisticRegression)
model = LogisticRegression()
model.fit.vertex.remote_config.staging_bucket = REMOTE_JOB_BUCKET
model.fit.vertex.remote = True
model.fit(X_train, y_train)
| Name | Description |
|---|---|
| cls_or_method | Required. A class or method to which Vertex remote execution support will be added. |
start_run
start_run(
    run: str,
    *,
    tensorboard: typing.Optional[
        typing.Union[
            google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str
        ]
    ] = None,
    resume=False
) -> google.cloud.aiplatform.metadata.experiment_run_resource.ExperimentRun
Starts a run and assigns it to the current session.
aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate':0.1})
Use as a context manager. The run will be ended on context exit:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run') as my_run:
    my_run.log_params({'learning_rate': 0.1})
Resume a previously started run:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run', resume=True) as my_run:
    my_run.log_params({'learning_rate': 0.1})
| Name | Description |
|---|---|
| run | Required. Name of the run to assign to the current session. |
| tensorboard | Optional. Backing TensorBoard resource used to store time series metrics logged to this run with log_time_series_metrics. |
| resume | Whether to resume this run. If False, a new run will be created. |