Autolog data to an experiment run

When autologging is enabled in the Vertex AI SDK, parameters and metrics from model-training runs are automatically logged to Vertex AI Experiments. Currently, autologging only supports parameter and metric logging.

Autolog data

There are two options for autologging data to Vertex AI Experiments.

  • Let the Vertex AI SDK automatically create ExperimentRun resources for you.
  • Specify the ExperimentRun resource that you'd like autologged parameters and metrics to be written to.

Auto-created

The Vertex AI SDK for Python handles creating ExperimentRun resources for you. Automatically created ExperimentRun resources will have a run name in the following format: {ml-framework-name}-{timestamp}-{uid}, for example: "tensorflow-2023-01-04-16-09-20-86a88".

Vertex AI SDK for Python

from typing import Union

from google.cloud import aiplatform


def autologging_with_auto_run_creation_sample(
    experiment_name: str,
    experiment_tensorboard: Union[str, aiplatform.Tensorboard],
    project: str,
    location: str,
):
    aiplatform.init(
        experiment_name=experiment_name,
        project=project,
        location=location,
        experiment_tensorboard=experiment_tensorboard,
    )

    aiplatform.autolog()

    # Your model training code goes here

    aiplatform.autolog(disable=True)

  • experiment_name: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
  • experiment_tensorboard: (Required) Provide a name for your Vertex AI TensorBoard instance. A TensorBoard instance is required for autologging because many model-training runs generate time series metrics. See Create a Vertex AI TensorBoard instance and Pricing for Vertex AI TensorBoard.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
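
For example, with placeholder values (the project, region, experiment name, and TensorBoard resource name below are all hypothetical), a call to the sample looks like this:

autologging_with_auto_run_creation_sample(
    experiment_name="my-autolog-experiment",
    experiment_tensorboard=(
        "projects/my-project/locations/us-central1/tensorboards/1234567890"
    ),
    project="my-project",
    location="us-central1",
)
# The auto-created run appears under the experiment with a generated
# name such as "tensorflow-2023-01-04-16-09-20-86a88".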

User-specified

Provide your own ExperimentRun names to have metrics and parameters from multiple model-training runs logged to the same ExperimentRun. Metrics and parameters from your model-training code are logged to the current run, set by calling aiplatform.start_run("your-run-name"), until aiplatform.end_run() is called.

Vertex AI SDK for Python

from typing import Union

from google.cloud import aiplatform


def autologging_with_manual_run_creation_sample(
    experiment_name: str,
    run_name: str,
    experiment_tensorboard: Union[str, aiplatform.Tensorboard],
    project: str,
    location: str,
):
    aiplatform.init(
        experiment_name=experiment_name,
        project=project,
        location=location,
        experiment_tensorboard=experiment_tensorboard,
    )

    aiplatform.autolog()

    aiplatform.start_run(run=run_name)

    # Your model training code goes here

    aiplatform.end_run()

    aiplatform.autolog(disable=True)

  • experiment_name: Provide the name of your experiment.
  • run_name: Provide a name for your experiment run.
  • project: Your project ID. You can find your project ID on the Google Cloud console welcome page.
  • location: See List of available locations.
  • experiment_tensorboard: Provide a name for your Vertex AI TensorBoard instance. A TensorBoard instance is required for autologging because many model-training runs generate time series metrics. See Create a Vertex AI TensorBoard instance and Pricing for Vertex AI TensorBoard.
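
To illustrate the same-run behavior, here is a minimal sketch; it assumes aiplatform.init() has already been called as in the sample above, and the run name and scikit-learn training code are stand-ins (any supported framework works):

from google.cloud import aiplatform
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

aiplatform.autolog()
aiplatform.start_run(run="my-comparison-run")  # hypothetical run name

X, y = make_regression(n_samples=100, n_features=4, random_state=0)

# Both fits happen before end_run(), so parameters and metrics from
# each are autologged to the same ExperimentRun.
LinearRegression().fit(X, y)
Ridge(alpha=0.5).fit(X, y)

aiplatform.end_run()
aiplatform.autolog(disable=True)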

Vertex AI SDK autologging uses MLflow's autologging in its implementation. Evaluation metrics and parameters from the following frameworks are logged to your ExperimentRun when autologging is enabled (a minimal training sketch follows the list).

  • Fastai
  • Gluon
  • Keras
  • LightGBM
  • PyTorch Lightning
  • Scikit-learn
  • Spark
  • Statsmodels
  • XGBoost
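
As a sketch of what "Your model training code" can look like, the Keras snippet below is enough to trigger autologging when it runs between aiplatform.autolog() and aiplatform.autolog(disable=True). The data and model are purely illustrative, and the exact parameter and metric names come from MLflow's Keras integration:

import numpy as np
from tensorflow import keras

# Illustrative data and model; with autologging enabled, fit()
# parameters (such as epochs and batch_size) and per-epoch training
# metrics (such as loss) are captured automatically.
X = np.random.rand(64, 8)
y = np.random.rand(64, 1)

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=16)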

View autologged parameters and metrics

Use the Vertex AI SDK for Python to compare runs and get run data. The Google Cloud console also provides an easy way to compare these runs.
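
For example, aiplatform.get_experiment_df() returns an experiment's runs, including their autologged parameters and metrics, as a pandas DataFrame. The project, region, and experiment name below are placeholders:

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# One row per ExperimentRun; columns include each autologged
# parameter and metric.
df = aiplatform.get_experiment_df("my-autolog-experiment")
print(df)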
