As a Data Scientist, this is a common workflow: train a model locally (in a notebook), log the parameters, log the training time-series metrics to TensorBoard, and log the evaluation metrics.
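The workflow above can be sketched with the `google-cloud-aiplatform` SDK. This is a minimal sketch, not the notebook's exact code: the project, location, experiment, and run names are placeholders, and the SDK import is kept inside the function because running it requires GCP credentials.

```python
def log_training_run(params, history, eval_metrics,
                     experiment="my-experiment", run_name="run-1"):
    """Log one training run to Vertex AI Experiments: hyperparameters,
    per-epoch time-series metrics (viewable in TensorBoard), and final
    evaluation metrics. Names are illustrative placeholders."""
    # Lazy import: the SDK and credentials are only needed when this runs.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1",
                    experiment=experiment)
    aiplatform.start_run(run_name)
    aiplatform.log_params(params)                 # hyperparameters
    for step, metrics in enumerate(history):      # per-epoch series
        aiplatform.log_time_series_metrics(metrics, step=step)
    aiplatform.log_metrics(eval_metrics)          # final evaluation
    aiplatform.end_run()
```

For example, `log_training_run({"lr": 0.01}, [{"loss": 0.9}, {"loss": 0.5}], {"accuracy": 0.87})` would record one run with two time-series steps.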
As a Data Scientist, I want to reuse data pre-processing code that others in my company have written, to simplify and standardize the complex data wrangling we do. I want to be able to:
- Use a Python data pre-processing library to clean up an in-memory dataset (a pandas DataFrame) in a notebook.
- Train a model using Keras (again in a notebook).
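The first step above, a reusable pre-processing function over a pandas DataFrame, might look like the following sketch. The function name and the specific cleanup rules are illustrative assumptions, not the company library's actual API.

```python
import pandas as pd

def clean_dataframe(df: pd.DataFrame) -> pd.DataFrame:
    """Reusable pre-processing step (illustrative): normalize column
    names, drop duplicate rows, and fill missing numeric values with
    the column median."""
    out = df.copy()
    # Normalize column names: strip whitespace, lowercase, snake_case.
    out.columns = [c.strip().lower().replace(" ", "_") for c in out.columns]
    out = out.drop_duplicates().reset_index(drop=True)
    # Impute missing numeric values with the per-column median.
    for col in out.select_dtypes(include="number"):
        out[col] = out[col].fillna(out[col].median())
    return out
```

Packaging steps like this in a shared library is what lets every notebook apply the same cleanup before handing the DataFrame to Keras for training.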
Notebook: Model experimentation with preprocessed data
In the "Build Vertex AI Experiment lineage for custom training" notebook, you learn how to integrate pre-processing code with Vertex AI Experiments. You also build the experiment lineage that lets you record, analyze, debug, and audit the metadata and artifacts produced along your ML journey.
You can view the artifact lineage in the Google Cloud console.
- Manually log data to an experiment run