Introduction to Vertex AI Experiments

The goal when developing a model is to identify the best model for that particular use case. To this end, Vertex AI Experiments enables you to track and analyze different model architectures, hyperparameters, and training environments.

Experiment runs do not incur additional charges. You're only charged for resources that you use during your experiment as described in Vertex AI pricing.

What do you want to do? Check out a notebook sample:

  • Track metrics and parameters: Compare models
  • Track experiment lineage: Model training
  • Track pipeline runs: Compare pipeline runs

Track steps, inputs, and outputs

Vertex AI Experiments enables you to track:

  • steps of an experiment run, for example, preprocessing and training
  • inputs, for example, algorithm, parameters, and datasets
  • outputs of those steps, for example, models, checkpoints, and metrics

You can then figure out what worked, what did not, and identify further avenues for experimentation.
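
For example, a minimal sketch of run tracking with the Vertex AI SDK for Python might look like the following; the project ID, experiment name, run name, and logged values are placeholders.

  from google.cloud import aiplatform

  # Initialize the SDK against an experiment (placeholder project and names).
  aiplatform.init(
      project="my-project",
      location="us-central1",
      experiment="my-experiment",
  )

  # Start a run, record its inputs (parameters) and outputs (metrics), then close it.
  aiplatform.start_run("run-1")
  aiplatform.log_params({"learning_rate": 0.01, "epochs": 10})
  aiplatform.log_metrics({"accuracy": 0.92})
  aiplatform.end_run()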

Analyze model performance

Vertex AI Experiments enables you to track and evaluate how your model performed in aggregate, against test datasets, and during the training run. This helps you understand the performance characteristics of your models: how well a particular model works overall, where it fails, and where it excels.
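
As a sketch, aggregate test-set results can be logged to the active experiment run; the metric keys, values, and labels below are illustrative, and an initialized experiment with an active run is assumed.

  from google.cloud import aiplatform

  # Assumes aiplatform.init(..., experiment=...) and aiplatform.start_run(...)
  # have already been called.
  aiplatform.log_metrics({"test_accuracy": 0.91, "test_loss": 0.28})

  # For classification models, a confusion matrix can also be logged.
  aiplatform.log_classification_metrics(
      labels=["cat", "dog"],
      matrix=[[9, 1], [2, 8]],
      display_name="test-set-confusion-matrix",
  )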

Compare model performance

Vertex AI Experiments enables you to group and compare multiple models across experiment runs. Each model has its own parameters, modeling techniques, architecture, and inputs. This comparison helps you select the best model.
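
For example, the SDK can return the runs of an experiment as a DataFrame for side-by-side comparison; the project and experiment name below are placeholders.

  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  # One row per run, with columns for the logged parameters and metrics.
  runs_df = aiplatform.get_experiment_df(experiment="my-experiment")
  print(runs_df)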

Search experiments

The Google Cloud console provides a centralized view of experiments, a cross-sectional view of the experiment runs, and the details for each run. The Vertex AI SDK for Python provides APIs to consume experiments, experiment runs, experiment run parameters, metrics, and artifacts.

Vertex AI Experiments, along with Vertex ML Metadata, provides a way to find the artifacts tracked in an experiment so you can quickly view the artifact's lineage, and the artifacts consumed and produced by steps in a run.
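
A sketch of reading experiments and their runs back through the SDK; the project, location, and experiment name are placeholders.

  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  # List the experiments in the project.
  for experiment in aiplatform.Experiment.list():
      print(experiment.name)

  # Inspect the runs of one experiment, including their parameters and metrics.
  for run in aiplatform.ExperimentRun.list(experiment="my-experiment"):
      print(run.name, run.get_params(), run.get_metrics())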

Scope of support

Vertex AI Experiments supports the development of models using Vertex AI custom training, Vertex AI Workbench notebooks, and Notebooks, across most ML frameworks. For some ML frameworks, such as TensorFlow, Vertex AI Experiments provides deep integrations into the framework that make tracking nearly automatic. For other ML frameworks, Vertex AI Experiments provides a framework-neutral Vertex AI SDK for Python that you can use (see Pre-built containers for TensorFlow, scikit-learn, PyTorch, and XGBoost).

Data models and concepts

Vertex AI Experiments is a context in Vertex ML Metadata: an experiment can contain any number of experiment runs as well as pipeline runs. An experiment run consists of parameters, summary metrics, time series metrics, Vertex AI resources (for example, PipelineJob), artifacts, and executions. Vertex AI TensorBoard, a managed version of open source TensorBoard, is used to store time series metrics. Executions and artifacts of a pipeline run are viewable in the Google Cloud console.
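
For example, a sketch of setting up this structure with the SDK; the project, TensorBoard display name, and experiment name are placeholders.

  from google.cloud import aiplatform

  # Create (or reuse) a Vertex AI TensorBoard instance for time series metrics.
  tensorboard = aiplatform.Tensorboard.create(display_name="my-tensorboard")

  # The experiment is the Vertex ML Metadata context that groups the runs.
  aiplatform.init(
      project="my-project",
      location="us-central1",
      experiment="my-experiment",
      experiment_tensorboard=tensorboard,
  )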

Vertex AI Experiments terms

Experiment, experiment run, and pipeline run

experiment
An experiment is a context that can contain a set of experiment runs in addition to pipeline runs, in which you can investigate, as a group, different configurations such as input artifacts or hyperparameters.
See Create an experiment.

experiment run
An experiment run can contain user-defined metrics, parameters, executions, artifacts, and Vertex resources (for example, PipelineJob).
See Create and manage experiment runs.

pipeline run
One or more Vertex PipelineJobs can be associated with an experiment, where each PipelineJob is represented as a single run. In this context, the parameters of the run are inferred from the parameters of the PipelineJob, the metrics are inferred from the system.Metric artifacts produced by that PipelineJob, and the artifacts of the run are inferred from the artifacts produced by that PipelineJob.
One or more Vertex PipelineJobs can also be associated directly with an ExperimentRun. In this context, the parameters, metrics, and artifacts are not inferred.

See Associate a pipeline with an experiment.
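
A sketch of associating a PipelineJob with an experiment at submission time; the display name, template path, pipeline root, parameter values, and experiment name are placeholders.

  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  job = aiplatform.PipelineJob(
      display_name="my-pipeline",
      template_path="pipeline.json",
      pipeline_root="gs://my-bucket/pipeline-root",
      parameter_values={"learning_rate": 0.01},
  )

  # Submitting with an experiment represents this PipelineJob as a run in it.
  job.submit(experiment="my-experiment")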

Parameters and metrics

parameters
Parameters are keyed input values that configure a run, regulate the behavior of the run, and affect the results of the run. Examples include learning rate, dropout rate, and number of training steps.

See Log parameters.
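
For example (the keys and values are illustrative, and an active run is assumed):

  from google.cloud import aiplatform

  # Assumes aiplatform.init(..., experiment=...) and aiplatform.start_run(...).
  aiplatform.log_params({
      "learning_rate": 0.01,
      "dropout_rate": 0.2,
      "train_steps": 1000,
  })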

summary metrics
Summary metrics are a single value for each metric key in an experiment run. For example, the test accuracy of an experiment run is the accuracy calculated against a test dataset at the end of training; it can be captured as a single-value summary metric.

See Log summary metrics.
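
For example (illustrative values, with an active run assumed):

  from google.cloud import aiplatform

  # Assumes an active experiment run; one value is stored per metric key.
  aiplatform.log_metrics({"test_accuracy": 0.91, "test_f1": 0.88})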

time series metrics
Time series metrics are longitudinal metric values where each value represents a step in the training routine portion of a run. Time series metrics are stored in Vertex AI TensorBoard. Vertex AI Experiments stores a reference to the Vertex TensorBoard resource.

See Log time series metrics.
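
For example (illustrative loss values, with an active run assumed whose experiment is backed by a Vertex AI TensorBoard instance):

  from google.cloud import aiplatform

  # Each call records one value per metric key at the given training step.
  for step, loss in enumerate([0.9, 0.6, 0.4], start=1):
      aiplatform.log_time_series_metrics({"loss": loss}, step=step)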

Resource types

pipeline job
A resource in the Vertex AI API that corresponds to a Vertex AI pipeline job. You create a PipelineJob when you want to run an ML pipeline on Vertex AI.

artifact
An artifact is a discrete entity or piece of data produced and consumed by a machine learning workflow. Examples of artifacts include datasets, models, input files, and training logs.

Vertex AI Experiments lets you define the type of an artifact; supported types include system.Dataset, system.Model, and system.Artifact.
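
For example, a sketch of creating a dataset-typed artifact with the SDK; the project, URI, and display name are placeholders.

  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  # Create an artifact of type system.Dataset that runs and executions can reference.
  dataset_artifact = aiplatform.Artifact.create(
      schema_title="system.Dataset",
      uri="gs://my-bucket/data/train.csv",
      display_name="training-data",
  )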

What's next