Compare and analyze runs

Use the Vertex AI SDK for Python to compare and analyze experiment runs programmatically. The Google Cloud console also provides an easy way to compare runs side by side.

Compare runs

Python

from google.cloud import aiplatform


def get_experiments_data_frame_sample(
    experiment: str,
    project: str,
    location: str,
):
    aiplatform.init(experiment=experiment, project=project, location=location)

    experiments_df = aiplatform.get_experiment_df()

    return experiments_df

  • experiment: Provide a name for the experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
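The returned object is a pandas DataFrame with one row per run, so you can compare runs with ordinary pandas operations. A minimal sketch, using illustrative column names (the real `param.*` and `metric.*` columns depend on what each run logged):

```python
import pandas as pd

# Illustrative stand-in for the DataFrame returned by get_experiment_df();
# the actual param.* and metric.* columns depend on what each run logged.
experiments_df = pd.DataFrame(
    {
        "run_name": ["run-1", "run-2", "run-3"],
        "param.learning_rate": [0.1, 0.01, 0.001],
        "metric.accuracy": [0.88, 0.94, 0.91],
    }
)

# Rank runs by a summary metric to find the best configuration.
ranked = experiments_df.sort_values("metric.accuracy", ascending=False)
best_run = ranked.iloc[0]
print(best_run["run_name"], best_run["param.learning_rate"])
```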

Get runs data

These samples show how to get run metrics, run parameters, time series metrics, artifacts, and classification metrics for an experiment run.

Summary metrics

Python

from typing import Dict, Union

from google.cloud import aiplatform


def get_experiment_run_metrics_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> Dict[str, Union[float, int]]:
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    return experiment_run.get_metrics()

  • run_name: Specify the appropriate run name for this session.
  • experiment: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
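`get_metrics()` returns a flat dictionary of summary metric names to values, so comparing two runs is a dictionary diff. A minimal sketch, with illustrative values standing in for two real `ExperimentRun.get_metrics()` calls:

```python
# Illustrative dicts standing in for ExperimentRun.get_metrics() results.
metrics_a = {"accuracy": 0.91, "loss": 0.30}
metrics_b = {"accuracy": 0.94, "loss": 0.25}

# Per-metric delta (run B minus run A) over the metrics both runs report.
delta = {
    name: round(metrics_b[name] - metrics_a[name], 6)
    for name in metrics_a.keys() & metrics_b.keys()
}
print(delta)
```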

Parameters

Python

from typing import Dict, Union

from google.cloud import aiplatform


def get_experiment_run_params_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> Dict[str, Union[float, int, str]]:
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    return experiment_run.get_params()

  • run_name: Specify the appropriate run name for this session.
  • experiment: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
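`get_params()` returns the logged parameters as a flat dictionary, which makes it easy to see exactly which settings changed between two runs. A sketch with illustrative values:

```python
# Illustrative dicts standing in for ExperimentRun.get_params() results.
params_a = {"learning_rate": 0.01, "epochs": 10, "optimizer": "sgd"}
params_b = {"learning_rate": 0.001, "epochs": 10, "optimizer": "adam"}

# Report only the parameters whose values differ between the two runs.
changed = {
    name: (params_a.get(name), params_b.get(name))
    for name in params_a.keys() | params_b.keys()
    if params_a.get(name) != params_b.get(name)
}
print(changed)
```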

Time series metrics

Python

from typing import Union

from google.cloud import aiplatform


def get_experiment_run_time_series_metric_data_frame_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> "pd.DataFrame":  # noqa: F821
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    return experiment_run.get_time_series_data_frame()

  • run_name: Specify the appropriate run name for this session.
  • experiment: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
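The returned DataFrame has one row per logged step, so standard pandas indexing works for inspecting or charting a metric over training. A sketch with an illustrative frame (the real columns match the metric names logged with `log_time_series_metrics`):

```python
import pandas as pd

# Illustrative stand-in for get_time_series_data_frame() output.
ts_df = pd.DataFrame({"step": [1, 2, 3, 4], "loss": [0.9, 0.5, 0.4, 0.3]})

# Index by step so a metric can be charted or summarized over training.
loss_by_step = ts_df.set_index("step")["loss"]
print("lowest loss at step", loss_by_step.idxmin())
```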

Artifacts

Python

from typing import List, Union

from google.cloud import aiplatform
from google.cloud.aiplatform.metadata import artifact


def get_experiment_run_artifacts_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> List[artifact.Artifact]:
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name,
        experiment=experiment,
        project=project,
        location=location,
    )

    return experiment_run.get_artifacts()

  • run_name: Specify the appropriate run name for this session.
  • experiment: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
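Each returned `Artifact` carries metadata such as a display name and URI. The loop below sketches a typical inspection pass, using simple stand-in objects rather than real artifacts (the attribute names mirror the SDK's `display_name` and `uri` properties):

```python
from types import SimpleNamespace

# Stand-ins for aiplatform Artifact objects, which expose metadata
# attributes such as display_name and uri.
artifacts = [
    SimpleNamespace(display_name="model", uri="gs://my-bucket/model"),
    SimpleNamespace(display_name="eval-data", uri="gs://my-bucket/eval.csv"),
]

# List each artifact's name and storage location.
for a in artifacts:
    print(f"{a.display_name}: {a.uri}")
```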

Classification metrics

Python

from typing import Dict, List, Union

from google.cloud import aiplatform


def get_experiment_run_classification_metrics_sample(
    run_name: str,
    experiment: Union[str, aiplatform.Experiment],
    project: str,
    location: str,
) -> List[Dict[str, Union[str, List]]]:
    experiment_run = aiplatform.ExperimentRun(
        run_name=run_name, experiment=experiment, project=project, location=location
    )

    return experiment_run.get_classification_metrics()

  • run_name: Specify the appropriate run name for this session.
  • experiment: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
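Each entry in the returned list describes one set of logged classification metrics. When a confusion matrix was logged with `log_classification_metrics`, overall accuracy can be recomputed from it. The keys below (`display_name`, `labels`, `matrix`) mirror that logging call's arguments, but the values are illustrative:

```python
# Illustrative entry shaped like a dict from get_classification_metrics();
# the exact keys depend on what was logged for the run.
cls_metrics = [
    {
        "display_name": "eval",
        "labels": ["cat", "dog"],
        "matrix": [[38, 2], [6, 34]],
    }
]

entry = cls_metrics[0]
# Diagonal of the confusion matrix = correctly classified examples.
correct = sum(entry["matrix"][i][i] for i in range(len(entry["labels"])))
total = sum(sum(row) for row in entry["matrix"])
print(f"{entry['display_name']} accuracy: {correct / total:.2f}")
```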

Google Cloud console

View experiment run details

Use the Google Cloud console to view details of your experiment runs.

  1. In the Google Cloud console, go to the Experiments page.
    Go to Experiments.
    A list of experiments appears in the Experiments page.
  2. Select the experiment containing the run that you want to check.
    A list of runs appears.
  3. Select the desired run.
    A page appears with the run's basic details and its associated parameters, metrics, and artifacts.

View artifact lineage

To view an artifact's lineage, follow the steps in "View experiment run details". From the run details page, select the Artifact ID to see the artifact's lineage.


What's next