ExperimentRun(
run_name: str,
experiment: Union[
google.cloud.aiplatform.metadata.experiment_resources.Experiment, str
],
*,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None
)
A Vertex AI Experiment run.
Inheritance
builtins.object > abc.ABC > google.cloud.aiplatform.metadata.experiment_resources._ExperimentLoggable > ExperimentRun
Properties
credentials
The credentials used to access this experiment run.
location
The location that this experiment is located in.
name
This run's name, used to identify this run within its Experiment.
project
The project that this experiment run is located in.
resource_id
The resource ID of this experiment run's Metadata context.
The resource ID is the final part of the resource name:
projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{resource ID}
resource_name
This run's Metadata context resource name.
In the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}
state
The state of this run.
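As described above, `resource_id` is the final segment of the Metadata context `resource_name`. The following is a minimal local sketch of that relationship; `parse_context_resource_name` is a hypothetical helper written for illustration, not part of the SDK.

```python
# Hypothetical helper (not part of the SDK): split an ExperimentRun's
# Metadata context resource name into its path components.
def parse_context_resource_name(resource_name: str) -> dict:
    parts = resource_name.split("/")
    # Expected layout:
    # projects/{project}/locations/{location}/metadataStores/{store}/contexts/{resource_id}
    if len(parts) != 8 or parts[0::2] != ["projects", "locations", "metadataStores", "contexts"]:
        raise ValueError(f"unexpected resource name: {resource_name!r}")
    return {
        "project": parts[1],
        "location": parts[3],
        "metadata_store": parts[5],
        "resource_id": parts[7],
    }

name = "projects/my-proj/locations/us-central1/metadataStores/default/contexts/my-experiment-my-run"
print(parse_context_resource_name(name)["resource_id"])  # my-experiment-my-run
```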
Methods
ExperimentRun
ExperimentRun(
run_name: str,
experiment: Union[
google.cloud.aiplatform.metadata.experiment_resources.Experiment, str
],
*,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None
)
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
Parameters

| Name | Description |
|---|---|
| run_name | str. Required. The name of this run. |
| experiment | Union[experiment_resources.Experiment, str]. Required. The name or instance of this experiment. |
| project | str. Optional. Project where this experiment run is located. Overrides project set in aiplatform.init. |
| location | str. Optional. Location where this experiment run is located. Overrides location set in aiplatform.init. |
| credentials | auth_credentials.Credentials. Optional. Custom credentials used to retrieve this experiment run. Overrides credentials set in aiplatform.init. |
__init_subclass__
__init_subclass__(
*,
experiment_loggable_schemas: Tuple[
google.cloud.aiplatform.metadata.experiment_resources._ExperimentLoggableSchema
],
**kwargs
)
Register the metadata_schema for the subclass so Experiment can use it to retrieve the associated types.
usage:
class PipelineJob(..., experiment_loggable_schemas=(_ExperimentLoggableSchema(title='system.PipelineRun'),)):
    ...
assign_backing_tensorboard
assign_backing_tensorboard(
tensorboard: Union[
google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str
]
)
Assigns a tensorboard as the backing tensorboard to support time series metrics logging for this run.

Parameter

| Name | Description |
|---|---|
| tensorboard | Union[aiplatform.Tensorboard, str]. Required. Tensorboard instance or resource name. |
associate_execution
associate_execution(
execution: google.cloud.aiplatform.metadata.execution.Execution,
)
Associates an execution with this experiment run.

Parameter

| Name | Description |
|---|---|
| execution | aiplatform.Execution. Execution to associate with this run. |
create
create(
    run_name: str,
    *,
    experiment: Optional[
        Union[google.cloud.aiplatform.metadata.experiment_resources.Experiment, str]
    ] = None,
    tensorboard: Optional[
        Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]
    ] = None,
    state: google.cloud.aiplatform_v1.types.execution.Execution.State = <State.RUNNING: 2>,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None
)
Creates a new experiment run in Vertex AI Experiments.
my_run = aiplatform.ExperimentRun.create('my-run', experiment='my-experiment')
Parameters

| Name | Description |
|---|---|
| run_name | str. Required. The name of this run. |
| experiment | Union[aiplatform.Experiment, str]. Optional. The name or instance of the experiment to create this run under. If not provided, will default to the experiment set in aiplatform.init. |
| tensorboard | Union[aiplatform.Tensorboard, str]. Optional. The resource name or instance of the Vertex Tensorboard to use as the backing Tensorboard for time series metric logging. If not provided, will default to the backing tensorboard of the parent experiment, if set. Must be in the same project and location as this experiment run. |
| state | aiplatform.gapic.Execution.State. Optional. The state of this run. Defaults to RUNNING. |
| project | str. Optional. Project where this experiment will be created. Overrides project set in aiplatform.init. |
| location | str. Optional. Location where this experiment will be created. Overrides location set in aiplatform.init. |
| credentials | auth_credentials.Credentials. Optional. Custom credentials used to create this experiment. Overrides credentials set in aiplatform.init. |
delete
delete(*, delete_backing_tensorboard_run: bool = False)
Deletes this experiment run.
Does not delete the executions, artifacts, or resources logged to this run.
Parameter

| Name | Description |
|---|---|
| delete_backing_tensorboard_run | bool. Optional. Whether to delete the backing tensorboard run that stores time series metrics for this run. |
end_run
end_run(*, state: google.cloud.aiplatform_v1.types.execution.Execution.State = <State.COMPLETE: 3>)
Ends this experiment run and sets state to COMPLETE.
Parameter

| Name | Description |
|---|---|
| state | aiplatform.gapic.Execution.State. Optional. Override the state at the end of the run. Defaults to COMPLETE. |
get
get(
run_name: str,
*,
experiment: Optional[
Union[google.cloud.aiplatform.metadata.experiment_resources.Experiment, str]
] = None,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None
)
Gets experiment run if one exists with this run_name.
Parameters

| Name | Description |
|---|---|
| run_name | str. Required. The name of this run. |
| experiment | Union[experiment_resources.Experiment, str]. Optional. The name or instance of this experiment. If not set, uses the default experiment set in aiplatform.init. |
| project | str. Optional. Project where this experiment run is located. Overrides project set in aiplatform.init. |
| location | str. Optional. Location where this experiment run is located. Overrides location set in aiplatform.init. |
| credentials | auth_credentials.Credentials. Optional. Custom credentials used to retrieve this experiment run. Overrides credentials set in aiplatform.init. |
get_artifacts
get_artifacts()
Get the list of artifacts associated with this run.
get_classification_metrics
get_classification_metrics()
Get all the classification metrics logged to this run.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
metric = my_run.get_classification_metrics()[0]
print(metric)
## print result:
{
"id": "e6c893a4-222e-4c60-a028-6a3b95dfc109",
"display_name": "my-classification-metrics",
"labels": ["cat", "dog"],
"matrix": [[9,1], [1,9]],
"fpr": [0.1, 0.5, 0.9],
"tpr": [0.1, 0.7, 0.9],
"thresholds": [0.9, 0.5, 0.1]
}
get_executions
get_executions()
Get the list of Executions associated with this run.
get_experiment_models
get_experiment_models()
Get all ExperimentModels associated with this experiment run.
get_logged_custom_jobs
get_logged_custom_jobs()
Get all CustomJobs associated with this experiment run.
get_logged_pipeline_jobs
get_logged_pipeline_jobs()
Get all PipelineJobs associated with this experiment run.
get_metrics
get_metrics()
Get the summary metrics logged to this run.
get_params
get_params()
Get the parameters logged to this run.
get_time_series_data_frame
get_time_series_data_frame()
Returns all time series in this Run as a DataFrame.
Returns

| Type | Description |
|---|---|
| pd.DataFrame | Time series metrics in this Run as a DataFrame. |
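The sketch below illustrates, without any SDK calls, how scattered time series points can be pivoted into one row per step, which is the general shape a call like get_time_series_data_frame() produces. The input layout here is an assumption for illustration, not the SDK's internal representation.

```python
from collections import defaultdict

# Assumed local stand-in for logged time series points: (metric, step, value).
points = [
    ("accuracy", 1, 0.70), ("accuracy", 2, 0.85),
    ("loss", 1, 0.90), ("loss", 2, 0.40),
]

# Pivot to one row per step, with one column per metric.
rows = defaultdict(dict)
for metric, step, value in points:
    rows[step][metric] = value

table = [{"step": step, **metrics} for step, metrics in sorted(rows.items())]
print(table)
# [{'step': 1, 'accuracy': 0.7, 'loss': 0.9}, {'step': 2, 'accuracy': 0.85, 'loss': 0.4}]
```

A list of such row dicts can be handed directly to the pandas DataFrame constructor to get the tabular form the real method returns.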
list
list(
*,
experiment: Optional[
Union[google.cloud.aiplatform.metadata.experiment_resources.Experiment, str]
] = None,
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None
)
List the experiment runs for a given aiplatform.Experiment.
my_runs = aiplatform.ExperimentRun.list(experiment='my-experiment')
Parameters

| Name | Description |
|---|---|
| experiment | Union[aiplatform.Experiment, str]. Optional. The experiment name or instance to list the experiment runs from. If not provided, will use the experiment set in aiplatform.init. |
| project | str. Optional. Project where this experiment is located. Overrides project set in aiplatform.init. |
| location | str. Optional. Location where this experiment is located. Overrides location set in aiplatform.init. |
| credentials | auth_credentials.Credentials. Optional. Custom credentials used to retrieve this experiment. Overrides credentials set in aiplatform.init. |
log
log(
*, pipeline_job: Optional[google.cloud.aiplatform.pipeline_jobs.PipelineJob] = None
)
Log a Vertex Resource to this experiment run.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_job = aiplatform.PipelineJob(...)
my_job.submit()
my_run.log(pipeline_job=my_job)
Parameter

| Name | Description |
|---|---|
| pipeline_job | aiplatform.PipelineJob. Optional. A Vertex PipelineJob. |
log_classification_metrics
log_classification_metrics(
*,
labels: Optional[List[str]] = None,
matrix: Optional[List[List[int]]] = None,
fpr: Optional[List[float]] = None,
tpr: Optional[List[float]] = None,
threshold: Optional[List[float]] = None,
display_name: Optional[str] = None
)
Create an artifact for classification metrics and log to ExperimentRun. Currently supports confusion matrix and ROC curve.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
classification_metrics = my_run.log_classification_metrics(
display_name='my-classification-metrics',
labels=['cat', 'dog'],
matrix=[[9, 1], [1, 9]],
fpr=[0.1, 0.5, 0.9],
tpr=[0.1, 0.7, 0.9],
threshold=[0.9, 0.5, 0.1],
)
Parameters

| Name | Description |
|---|---|
| labels | List[str]. Optional. List of label names for the confusion matrix. Must be set if 'matrix' is set. |
| matrix | List[List[int]]. Optional. Values for the confusion matrix. Must be set if 'labels' is set. |
| fpr | List[float]. Optional. List of false positive rates for the ROC curve. Must be set if 'tpr' or 'threshold' is set. |
| tpr | List[float]. Optional. List of true positive rates for the ROC curve. Must be set if 'fpr' or 'threshold' is set. |
| threshold | List[float]. Optional. List of thresholds for the ROC curve. Must be set if 'fpr' or 'tpr' is set. |
| display_name | str. Optional. The user-defined name for the classification metric artifact. |

Exceptions

| Type | Description |
|---|---|
| ValueError | If 'labels' and 'matrix' are not set together, if 'labels' and 'matrix' are not the same length, if 'fpr', 'tpr', and 'threshold' are not set together, or if 'fpr', 'tpr', and 'threshold' are not the same length. |
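The pairing rules above can be checked locally before making the SDK call. The following is a minimal sketch of that validation; `check_classification_args` is a hypothetical helper written for illustration, not SDK code, and the actual SDK may enforce these rules differently.

```python
# Hypothetical local validation mirroring the documented pairing rules:
# 'labels'/'matrix' go together and must match in length;
# 'fpr'/'tpr'/'threshold' go together and must match in length.
def check_classification_args(labels=None, matrix=None, fpr=None, tpr=None, threshold=None):
    if (labels is None) != (matrix is None):
        raise ValueError("'labels' and 'matrix' must be set together")
    if labels is not None and len(labels) != len(matrix):
        raise ValueError("'labels' and 'matrix' must be the same length")
    roc = [fpr, tpr, threshold]
    if any(v is not None for v in roc) and not all(v is not None for v in roc):
        raise ValueError("'fpr', 'tpr', and 'threshold' must be set together")
    if fpr is not None and not (len(fpr) == len(tpr) == len(threshold)):
        raise ValueError("'fpr', 'tpr', and 'threshold' must be the same length")
    return True

check_classification_args(labels=['cat', 'dog'], matrix=[[9, 1], [1, 9]])
# my_run.log_classification_metrics(labels=..., matrix=...)  # requires a live run
```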
log_metrics
log_metrics(metrics: Dict[str, Union[float, int, str]])
Log single or multiple Metrics with specified key and value pairs.
Metrics with the same key will be overwritten.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.log_metrics({'accuracy': 0.9, 'recall': 0.8})
Parameter

| Name | Description |
|---|---|
| metrics | Dict[str, Union[float, int, str]]. Required. Metrics key/value pairs. |

Exceptions

| Type | Description |
|---|---|
| TypeError | If keys are not str or values are not float, int, or str. |
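The TypeError behavior above can be sketched as a local check run before the SDK call. `validate_metrics` is a hypothetical helper for illustration, not part of the SDK.

```python
# Hypothetical local check mirroring the documented type rules:
# keys must be str; values must be float, int, or str.
def validate_metrics(metrics):
    for key, value in metrics.items():
        if not isinstance(key, str):
            raise TypeError(f"metric key {key!r} is not a str")
        if not isinstance(value, (float, int, str)):
            raise TypeError(f"metric value {value!r} is not float, int, or str")
    return metrics

validate_metrics({'accuracy': 0.9, 'recall': 0.8})
# my_run.log_metrics({'accuracy': 0.9, 'recall': 0.8})  # requires a live run
```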
log_model
log_model(
model: Union[sklearn.base.BaseEstimator, xgb.Booster, tf.Module],
artifact_id: Optional[str] = None,
*,
uri: Optional[str] = None,
input_example: Union[list, dict, pd.DataFrame, np.ndarray] = None,
display_name: Optional[str] = None,
metadata_store_id: Optional[str] = "default",
project: Optional[str] = None,
location: Optional[str] = None,
credentials: Optional[google.auth.credentials.Credentials] = None
)
Saves an ML model as an MLMD artifact and logs it to this ExperimentRun.
Supported model frameworks: sklearn, xgboost, tensorflow.
Example usage:
model = LinearRegression()
model.fit(X, y)
aiplatform.init(
    project="my-project",
    location="my-location",
    staging_bucket="gs://my-bucket",
    experiment="my-exp",
)
with aiplatform.start_run("my-run"):
    aiplatform.log_model(model, "my-sklearn-model")
Parameters

| Name | Description |
|---|---|
| model | Union["sklearn.base.BaseEstimator", "xgb.Booster", "tf.Module"]. Required. A machine learning model. |
| artifact_id | str. Optional. The resource id of the artifact. This id must be globally unique in a metadataStore. It may be up to 63 characters, and valid characters are |
| uri | str. Optional. A gcs directory to save the model file. If not provided, |
| input_example | Union[list, dict, pd.DataFrame, np.ndarray]. Optional. An example of a valid model input. Will be stored as a yaml file in the gcs uri. Accepts list, dict, pd.DataFrame, and np.ndarray. The value inside a list must be a scalar or list. The value inside a dict must be a scalar, list, or np.ndarray. |
| display_name | str. Optional. The display name of the artifact. |
| metadata_store_id | str. Optional. The <metadata_store_id> portion of the resource name with the format: projects/123/locations/us-central1/metadataStores/<metadata_store_id>/artifacts/<resource_id>. If not provided, the MetadataStore's ID will be set to "default". |
| project | str. Optional. Project used to create this Artifact. Overrides project set in aiplatform.init. |
| location | str. Optional. Location used to create this Artifact. Overrides location set in aiplatform.init. |
| credentials | auth_credentials.Credentials. Optional. Custom credentials used to create this Artifact. Overrides credentials set in aiplatform.init. |

Exceptions

| Type | Description |
|---|---|
| ValueError | If the model type is not supported. |
log_params
log_params(params: Dict[str, Union[float, int, str]])
Log single or multiple parameters with specified key value pairs.
Parameters with the same key will be overwritten.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
Parameter

| Name | Description |
|---|---|
| params | Dict[str, Union[float, int, str]]. Required. Parameter key/value pairs. |

Exceptions

| Type | Description |
|---|---|
| TypeError | If keys are not str or values are not float, int, or str. |
log_time_series_metrics
log_time_series_metrics(
metrics: Dict[str, float],
step: Optional[int] = None,
wall_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None,
)
Logs time series metrics to the backing TensorboardRun of this experiment run.
run.log_time_series_metrics({'accuracy': 0.9}, step=10)
Parameters

| Name | Description |
|---|---|
| metrics | Dict[str, float]. Required. Dictionary where keys are metric names and values are metric values. |
| step | int. Optional. Step index of this data point within the run. If not provided, the latest step among all time series metrics already logged will be used. |
| wall_time | timestamp_pb2.Timestamp. Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, this will be generated based on the value from time.time(). |

Exceptions

| Type | Description |
|---|---|
| RuntimeError | If the current experiment run doesn't have a backing Tensorboard resource. |
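The documented default for `step` (reuse the latest step already logged across all time series) can be sketched locally as follows. Here `logged` is a stand-in for state the backing TensorboardRun tracks, and `next_step` is a hypothetical helper for illustration only.

```python
# Assumed stand-in for already-logged time series: metric name -> {step: value}.
logged = {"accuracy": {1: 0.70, 2: 0.85}, "loss": {1: 0.90}}

# Hypothetical helper mirroring the documented default: an explicit step wins;
# otherwise the latest step among all logged time series is used.
def next_step(explicit_step=None):
    if explicit_step is not None:
        return explicit_step
    all_steps = [s for series in logged.values() for s in series]
    return max(all_steps) if all_steps else 0

print(next_step())    # 2
print(next_step(10))  # 10
```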
update_state
update_state(state: google.cloud.aiplatform_v1.types.execution.Execution.State)
Update the state of this experiment run.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.update_state(state=aiplatform.gapic.Execution.State.COMPLETE)
Parameter

| Name | Description |
|---|---|
| state | aiplatform.gapic.Execution.State. State of this run. |