Class ExperimentRun (1.18.1)

ExperimentRun(
    run_name: str,
    experiment: Union[
        google.cloud.aiplatform.metadata.experiment_resources.Experiment, str
    ],
    *,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None
)

A Vertex AI Experiment run

Inheritance

builtins.object > abc.ABC > google.cloud.aiplatform.metadata.experiment_resources._ExperimentLoggable > ExperimentRun

Properties

credentials

The credentials used to access this experiment run.

location

The location that this experiment run is located in.

name

This run's name, used to identify it within its Experiment.

project

The project that this experiment run is located in.

resource_id

The resource ID of this experiment run's Metadata context.

The resource ID is the final part of the resource name: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{resource ID}

resource_name

This run's Metadata context resource name.

In the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

state

The state of this run.
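
For illustration, a minimal sketch of reading these properties from an existing run. The run and experiment names are hypothetical, and aiplatform is assumed to have been imported and initialized.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
print(my_run.name)           # 'my-run'
print(my_run.resource_name)  # projects/.../locations/.../metadataStores/.../contexts/...
print(my_run.state)          # e.g. Execution.State.RUNNING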

Methods

ExperimentRun

ExperimentRun(
    run_name: str,
    experiment: Union[
        google.cloud.aiplatform.metadata.experiment_resources.Experiment, str
    ],
    *,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None
)
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
Parameters
Name | Description
run_name str

Required. The name of this run.

experiment Union[experiment_resources.Experiment, str]

Required. The name or instance of this experiment.

project str

Optional. Project where this experiment run is located. Overrides project set in aiplatform.init.

location str

Optional. Location where this experiment run is located. Overrides location set in aiplatform.init.

credentials auth_credentials.Credentials

Optional. Custom credentials used to retrieve this experiment run. Overrides credentials set in aiplatform.init.

__init_subclass__

__init_subclass__(
    *,
    experiment_loggable_schemas: Tuple[
        google.cloud.aiplatform.metadata.experiment_resources._ExperimentLoggableSchema
    ],
    **kwargs
)

Register the metadata_schema for the subclass so Experiment can use it to retrieve the associated types.

Usage:

class PipelineJob(..., experiment_loggable_schemas=(_ExperimentLoggableSchema(title='system.PipelineRun'),)):

assign_backing_tensorboard

assign_backing_tensorboard(
    tensorboard: Union[
        google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str
    ]
)

Assigns tensorboard as the backing tensorboard to support time series metrics logging for this run.

Parameter
Name | Description
tensorboard Union[aiplatform.Tensorboard, str]

Required. Tensorboard instance or resource name.
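
A minimal sketch, assuming an existing run and a Vertex Tensorboard; the resource name below is a hypothetical placeholder.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
# Either a Tensorboard instance or its full resource name is accepted.
my_run.assign_backing_tensorboard(
    tensorboard='projects/my-project/locations/us-central1/tensorboards/1234567890'
)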

associate_execution

associate_execution(
    execution: google.cloud.aiplatform.metadata.execution.Execution,
)

Associate an execution with this experiment run.

Parameter
Name | Description
execution aiplatform.Execution

Execution to associate with this run.
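
An illustrative sketch that creates a standalone Execution and attaches it to the run; the schema title and names are chosen for illustration only.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_execution = aiplatform.Execution.create(schema_title='system.ContainerExecution')
my_run.associate_execution(my_execution)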

create

create(
    run_name: str,
    *,
    experiment: Optional[
        Union[str, google.cloud.aiplatform.metadata.experiment_resources.Experiment]
    ] = None,
    tensorboard: Optional[
        Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]
    ] = None,
    state: google.cloud.aiplatform_v1.types.execution.Execution.State = <State.RUNNING: 2>,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None
)

Creates a new experiment run in Vertex AI Experiments.

my_run = aiplatform.ExperimentRun.create('my-run', experiment='my-experiment')
Parameters
Name | Description
run_name str

Required. The name of this run.

experiment Union[aiplatform.Experiment, str]

Optional. The name or instance of the experiment to create this run under. If not provided, will default to the experiment set in aiplatform.init.

tensorboard Union[aiplatform.Tensorboard, str]

Optional. The resource name or instance of the Vertex Tensorboard to use as the backing Tensorboard for time series metric logging. If not provided, will default to the backing tensorboard of the parent experiment if set. Must be in the same project and location as this experiment run.

state aiplatform.gapic.Execution.State

Optional. The state of this run. Defaults to RUNNING.

project str

Optional. Project where this experiment run will be created. Overrides project set in aiplatform.init.

location str

Optional. Location where this experiment run will be created. Overrides location set in aiplatform.init.

credentials auth_credentials.Credentials

Optional. Custom credentials used to create this experiment run. Overrides credentials set in aiplatform.init.

delete

delete(*, delete_backing_tensorboard_run: bool = False)

Deletes this experiment run.

Does not delete the executions, artifacts, or resources logged to this run.

Parameter
Name | Description
delete_backing_tensorboard_run bool

Optional. Whether to delete the backing tensorboard run that stores time series metrics for this run.
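
An illustrative sketch (run and experiment names are hypothetical); pass delete_backing_tensorboard_run=True only if the time series metrics should also be removed.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
# Also remove the TensorboardRun that stores this run's time series metrics.
my_run.delete(delete_backing_tensorboard_run=True)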

end_run

end_run(*, state: google.cloud.aiplatform_v1.types.execution.Execution.State = <State.COMPLETE: 3>)

Ends this experiment run and sets state to COMPLETE.

Parameter
Name | Description
state aiplatform.gapic.Execution.State

Optional. Override the state at the end of the run. Defaults to COMPLETE.
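
For example, ending a run normally or recording a failure (names hypothetical):

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.end_run()  # sets state to COMPLETE
# or, to record that the run failed:
my_run.end_run(state=aiplatform.gapic.Execution.State.FAILED)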

get_artifacts

get_artifacts()

Get the list of artifacts associated with this run.
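
A minimal sketch that iterates over the returned aiplatform.Artifact objects; the attributes printed here are illustrative.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
for artifact in my_run.get_artifacts():
    print(artifact.display_name, artifact.uri)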

get_classification_metrics

get_classification_metrics()

Get all the classification metrics logged to this run.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
metric = my_run.get_classification_metrics()[0]
print(metric)
## print result:
    {
        "id": "e6c893a4-222e-4c60-a028-6a3b95dfc109",
        "display_name": "my-classification-metrics",
        "labels": ["cat", "dog"],
        "matrix": [[9,1], [1,9]],
        "fpr": [0.1, 0.5, 0.9],
        "tpr": [0.1, 0.7, 0.9],
        "thresholds": [0.9, 0.5, 0.1]
    }

get_executions

get_executions()

Get the list of Executions associated with this run.
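
For example (names hypothetical):

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
for execution in my_run.get_executions():
    print(execution.display_name, execution.state)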

get_logged_pipeline_jobs

get_logged_pipeline_jobs()

Get all PipelineJobs associated with this experiment run.
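
An illustrative sketch listing pipeline jobs previously logged to this run via log():

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
for pipeline_job in my_run.get_logged_pipeline_jobs():
    print(pipeline_job.display_name, pipeline_job.state)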

get_metrics

get_metrics()

Get the summary metrics logged to this run.
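
For example, assuming metrics were previously logged with log_metrics; the values shown are hypothetical.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
metrics = my_run.get_metrics()  # e.g. {'accuracy': 0.9, 'recall': 0.8}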

get_params

get_params()

Get the parameters logged to this run.
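
Similarly, a sketch of reading back parameters logged with log_params; values are hypothetical.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
params = my_run.get_params()  # e.g. {'learning_rate': 0.1, 'dropout_rate': 0.2}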

get_time_series_data_frame

get_time_series_data_frame()

Returns all time series in this Run as a DataFrame.

Returns
Type | Description
pd.DataFrame | Time series metrics in this Run as a DataFrame.
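
A minimal sketch; the exact columns of the returned DataFrame depend on which time series metrics were logged, and pandas must be installed.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
df = my_run.get_time_series_data_frame()
print(df.head())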

list

list(
    *,
    experiment: Optional[
        Union[str, google.cloud.aiplatform.metadata.experiment_resources.Experiment]
    ] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None
)

List the experiment runs for a given aiplatform.Experiment.

my_runs = aiplatform.ExperimentRun.list(experiment='my-experiment')
Parameters
Name | Description
experiment Union[aiplatform.Experiment, str]

Optional. The experiment name or instance to list experiment runs from. If not provided, will use the experiment set in aiplatform.init.

project str

Optional. Project where this experiment is located. Overrides project set in aiplatform.init.

location str

Optional. Location where this experiment is located. Overrides location set in aiplatform.init.

credentials auth_credentials.Credentials

Optional. Custom credentials used to retrieve this experiment. Overrides credentials set in aiplatform.init.

log

log(
    *, pipeline_job: Optional[google.cloud.aiplatform.pipeline_jobs.PipelineJob] = None
)

Log a Vertex Resource to this experiment run.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_job = aiplatform.PipelineJob(...)
my_job.submit()
my_run.log(pipeline_job=my_job)
Parameter
Name | Description
pipeline_job aiplatform.PipelineJob

Optional. A Vertex PipelineJob.

log_classification_metrics

log_classification_metrics(
    *,
    labels: Optional[List[str]] = None,
    matrix: Optional[List[List[int]]] = None,
    fpr: Optional[List[float]] = None,
    tpr: Optional[List[float]] = None,
    threshold: Optional[List[float]] = None,
    display_name: Optional[str] = None
)

Create an artifact for classification metrics and log to ExperimentRun. Currently supports confusion matrix and ROC curve.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.log_classification_metrics(
    display_name='my-classification-metrics',
    labels=['cat', 'dog'],
    matrix=[[9, 1], [1, 9]],
    fpr=[0.1, 0.5, 0.9],
    tpr=[0.1, 0.7, 0.9],
    threshold=[0.9, 0.5, 0.1],
)
Parameters
Name | Description
labels List[str]

Optional. List of label names for the confusion matrix. Must be set if 'matrix' is set.

matrix List[List[int]]

Optional. Values for the confusion matrix. Must be set if 'labels' is set.

fpr List[float]

Optional. List of false positive rates for the ROC curve. Must be set if 'tpr' or 'threshold' is set.

tpr List[float]

Optional. List of true positive rates for the ROC curve. Must be set if 'fpr' or 'threshold' is set.

threshold List[float]

Optional. List of thresholds for the ROC curve. Must be set if 'fpr' or 'tpr' is set.

display_name str

Optional. The user-defined name for the classification metric artifact.

Exceptions
Type | Description
ValueError | If 'labels' and 'matrix' are not set together, if 'labels' and 'matrix' do not have the same length, if 'fpr', 'tpr', and 'threshold' are not set together, or if 'fpr', 'tpr', and 'threshold' do not have the same length.

log_metrics

log_metrics(metrics: Dict[str, Union[float, int, str]])

Log single or multiple Metrics with specified key and value pairs.

Metrics with the same key will be overwritten.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.log_metrics({'accuracy': 0.9, 'recall': 0.8})
Parameter
Name | Description
metrics Dict[str, Union[float, int, str]]

Required. Metrics key/value pairs.

Exceptions
Type | Description
TypeError | If keys are not str or values are not float, int, or str.

log_params

log_params(params: Dict[str, Union[float, int, str]])

Log single or multiple parameters with specified key value pairs.

Parameters with the same key will be overwritten.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
Parameter
Name | Description
params Dict[str, Union[float, int, str]]

Required. Parameter key/value pairs.

Exceptions
Type | Description
TypeError | If key is not str or value is not float, int, or str.

log_time_series_metrics

log_time_series_metrics(
    metrics: Dict[str, float],
    step: Optional[int] = None,
    wall_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None,
)

Logs time series metrics to the backing TensorboardRun of this Experiment Run.

run.log_time_series_metrics({'accuracy': 0.9}, step=10)
Parameters
Name | Description
metrics Dict[str, float]

Required. Dictionary where keys are metric names and values are metric values.

step int

Optional. Step index of this data point within the run. If not provided, the latest step amongst all time series metrics already logged will be used.

wall_time timestamp_pb2.Timestamp

Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, this will be generated based on the value from time.time().

Exceptions
Type | Description
RuntimeError | If current experiment run doesn't have a backing Tensorboard resource.

update_state

update_state(state: google.cloud.aiplatform_v1.types.execution.Execution.State)

Update the state of this experiment run.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
my_run.update_state(state=aiplatform.gapic.Execution.State.COMPLETE)
Parameter
Name | Description
state aiplatform.gapic.Execution.State

State of this run.