Compare and analyze runs

Use the Vertex AI SDK for Python to compare and analyze runs programmatically. The Google Cloud console also provides an easy way to compare runs visually.

API

Compare runs

Python

from google.cloud import aiplatform


def get_experiments_data_frame_sample(
    experiment_name: str,
    project: str,
    location: str,
):
    # Initialize the SDK in the context of the experiment.
    aiplatform.init(experiment=experiment_name, project=project, location=location)

    # Return a pandas DataFrame with one row per run in the experiment,
    # including each run's parameters and metrics.
    experiments_df = aiplatform.get_experiment_df()

    return experiments_df

  • experiment_name: The name of the experiment whose runs you want to compare. You can find your list of experiments in the console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the console welcome page.
  • location: See List of available locations.
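
The following is a minimal usage sketch for the sample above; the experiment name, project ID, and region are hypothetical placeholders that you would replace with your own values:

# Hypothetical values; substitute your own experiment, project, and region.
df = get_experiments_data_frame_sample(
    experiment_name="my-experiment",
    project="my-project",
    location="us-central1",
)

# Each row corresponds to one run; columns include the run name, state,
# and the parameters and metrics logged to that run.
print(df.head())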

Get run metrics

Python

from typing import Dict, Union

from google.cloud import aiplatform


def get_experiment_run_metrics_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> Dict[str, Union[float, int]]:
    # Look up the existing run within the experiment.
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    # Return the summary metrics logged to the run as a dictionary.
    return experiment_run.get_metrics()

  • run_name: The name of the run whose metrics you want to retrieve.
  • experiment: The name or instance of the experiment that contains the run. You can find your list of experiments in the console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the console welcome page.
  • location: See List of available locations.
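
For example, a minimal usage sketch with hypothetical run and experiment names:

# Hypothetical values; substitute your own run, experiment, project, and region.
metrics = get_experiment_run_metrics_sample(
    run_name="my-run",
    experiment="my-experiment",
    project="my-project",
    location="us-central1",
)

# Prints a dict of summary metrics, for example: {"accuracy": 0.95, "loss": 0.12}
print(metrics)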

Get run parameters

Python

from typing import Dict, Union

from google.cloud import aiplatform


def get_experiment_run_params_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> Dict[str, Union[float, int, str]]:
    # Look up the existing run within the experiment.
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    # Return the parameters logged to the run as a dictionary.
    return experiment_run.get_params()

  • run_name: The name of the run whose parameters you want to retrieve.
  • experiment: The name or instance of the experiment that contains the run. You can find your list of experiments in the console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the console welcome page.
  • location: See List of available locations.
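
A minimal usage sketch, again with hypothetical names:

# Hypothetical values; substitute your own run, experiment, project, and region.
params = get_experiment_run_params_sample(
    run_name="my-run",
    experiment="my-experiment",
    project="my-project",
    location="us-central1",
)

# Prints a dict of logged parameters, for example: {"learning_rate": 0.01, "epochs": 10}
print(params)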

Get run time series metrics

Python

from typing import Union

from google.cloud import aiplatform


def get_experiment_run_time_series_metric_data_frame_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> "pd.DataFrame":  # noqa: F821
    # Look up the existing run within the experiment.
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    # Return a pandas DataFrame with one row per step and one column per
    # time series metric logged to the run.
    return experiment_run.get_time_series_data_frame()

  • run_name: The name of the run whose time series metrics you want to retrieve.
  • experiment: The name or instance of the experiment that contains the run. You can find your list of experiments in the console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the console welcome page.
  • location: See List of available locations.
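
A minimal usage sketch with hypothetical names; the resulting DataFrame can be plotted or joined with other run data:

# Hypothetical values; substitute your own run, experiment, project, and region.
time_series_df = get_experiment_run_time_series_metric_data_frame_sample(
    run_name="my-run",
    experiment="my-experiment",
    project="my-project",
    location="us-central1",
)

# Each row is a step; columns hold the metric values logged at that step.
print(time_series_df.head())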

Get artifact

Python

from google.cloud import aiplatform


def get_artifact_sample(
    artifact_id: str,
    project: str,
    location: str,
):
    # Retrieve the artifact resource by its ID.
    artifact = aiplatform.Artifact.get(
        resource_id=artifact_id, project=project, location=location
    )

    return artifact
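
  • artifact_id: The ID of the artifact to retrieve. You can find artifact IDs on a run's details page in the console.
  • project: Your project ID. You can find your project ID on the console welcome page.
  • location: See List of available locations.

A minimal usage sketch with a hypothetical artifact ID:

# Hypothetical values; substitute your own artifact ID, project, and region.
artifact = get_artifact_sample(
    artifact_id="my-artifact-id",
    project="my-project",
    location="us-central1",
)

# Inspect the artifact's resource name and URI.
print(artifact.resource_name)
print(artifact.uri)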

Google Cloud console

View experiment run details

Use the console to view the details of your experiment runs:

  1. In the console, go to the Experiments page.
    A list of experiments appears.
  2. Select the experiment containing the run that you want to check.
    A list of runs appears.
  3. Select the desired run.
    A page appears showing the run's basic details along with its associated parameters, metrics, and artifacts.

[Screenshot: compare and analyze runs in Vertex AI]

View artifact lineage

To view an artifact's lineage, follow the steps in "View experiment run details". From the run details page, select the artifact ID to see its lineage.

[Screenshot: view artifact lineage in Vertex AI]

What's next