AI Platform Pipelines provides a platform that you can use to automate your machine learning (ML) workflow as a pipeline. By running your ML process as a pipeline, you can:
- Run pipelines on an ad-hoc basis.
- Schedule recurring runs to retrain your model on a regular basis.
- Experiment by running your pipeline with different sets of hyperparameters, numbers of training steps or iterations, and so on, and then compare the results of your experiments. (A brief SDK sketch of these patterns follows this list.)
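If you prefer to script these patterns instead of using the dashboard, the following is a minimal sketch, assuming the Kubeflow Pipelines SDK (kfp v1) and a compiled pipeline package; the host URL, package path, and the learning-rate parameter are placeholders rather than part of any particular pipeline.

```python
import kfp

# Assumption: your Kubeflow Pipelines endpoint URL, shown on the pipelines dashboard.
client = kfp.Client(host="https://<your-pipelines-dashboard-url>")

# Ad-hoc run of a compiled pipeline package.
client.create_run_from_pipeline_package(
    "my_pipeline.yaml",
    arguments={"learning-rate": 0.01},
    run_name="ad-hoc-run",
)

# Experiment: launch one run per hyperparameter value, then compare the runs in the UI.
for lr in (0.1, 0.01, 0.001):
    client.create_run_from_pipeline_package(
        "my_pipeline.yaml",
        arguments={"learning-rate": lr},
        run_name=f"lr-{lr}",
        experiment_name="learning-rate-sweep",  # groups the related runs together
    )
```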
This guide describes how to run a pipeline and schedule recurring runs. It also provides resources that you can use to learn more about the Kubeflow Pipelines user interface.
Before you begin
This guide describes how to use the Kubeflow Pipelines user interface to run a pipeline. Before you can run a pipeline, you must set up your AI Platform Pipelines cluster and ensure that you have sufficient permissions to access it.
Run an ML pipeline
Use the following instructions to run an ML pipeline on your AI Platform Pipelines cluster.
Open AI Platform Pipelines in the Google Cloud console.
Click Open pipelines dashboard for your Kubeflow Pipelines cluster. The Kubeflow Pipelines user interface opens in a new tab.
In the left navigation panel, click Pipelines.
Click the name of the pipeline that you want to run. If you have not yet loaded a pipeline, click the name of an example pipeline like [Demo] TFX - Taxi Tip Prediction Model Trainer. A graph displaying the steps in the pipeline opens.
To run or schedule the pipeline, click Create run. A form where you can enter the run details opens.
Before you run a pipeline, you must specify the run details, run type, and run parameters.
In the Run details section, specify the following (an SDK equivalent of these fields is sketched after this list):
- Pipeline: Select the pipeline that you want to run.
- Pipeline Version: Select the version of the pipeline that you want to run.
- Run name: Enter a unique name for this run. You can use the name to find this run later.
- Description: (Optional) Enter a description to provide more information about this run.
- Experiment: (Optional) To group related runs together, select an experiment.
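The same fields map onto Kubeflow Pipelines SDK arguments. The following is a hedged sketch, assuming the kfp v1 SDK; the host URL, names, and IDs are placeholders.

```python
import kfp

client = kfp.Client(host="https://<your-pipelines-dashboard-url>")

# Experiment: groups related runs together.
experiment = client.create_experiment(
    name="taxi-tip-experiments",
    description="Runs of the taxi tip prediction demo",
)

run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name="taxi-tip-run-001",          # the Run name shown in the UI
    pipeline_id="<pipeline-id>",          # the Pipeline selected in the form
    # version_id="<pipeline-version-id>", # optionally, a specific Pipeline Version
    params={},                            # Run parameters (see the Run parameters section)
)
```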
In the Run type section, indicate how frequently this run should be executed. (A scheduling sketch that uses the SDK follows this list.)
- Select whether this is a One-off or Recurring run.
If this is a recurring run, specify the run trigger:
- Trigger type: Select whether this run is triggered on a periodic basis or based on a cron schedule.
- Maximum concurrent runs: Enter the maximum number of runs that can be active at one time.
- Has start date: Check Has start date, then enter the Start date and Start time to specify when this trigger should start creating runs.
- Has end date: Check Has end date, then enter the End date and End time to specify when this trigger should stop creating runs.
- Run every: Select the frequency for triggering new runs. If this run is triggered based on a cron schedule, check Allow editing cron expression to enter a cron expression directly.
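The trigger settings above can also be supplied through the SDK. The following is a sketch assuming the kfp v1 SDK's Client.create_recurring_run method; the host URL, IDs, and schedule values are placeholders.

```python
import kfp

client = kfp.Client(host="https://<your-pipelines-dashboard-url>")
experiment = client.create_experiment(name="scheduled-retraining")

client.create_recurring_run(
    experiment_id=experiment.id,
    job_name="nightly-retrain",           # name for runs created by this trigger
    pipeline_id="<pipeline-id>",
    cron_expression="0 0 2 * * *",        # cron trigger; assumed 6-field format (with seconds)
    max_concurrency=1,                    # Maximum concurrent runs
    start_time="2024-01-01T00:00:00Z",    # Has start date
    end_time="2024-12-31T00:00:00Z",      # Has end date
    params={},
)
```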
In the Run parameters section, customize the pipeline parameters for this run. You can use parameters to set values such as paths for loading training data or storing artifacts, hyperparameters, the number of training iterations, and so on. A pipeline's parameters are defined when the pipeline is built. (A build-time sketch of how parameters are declared follows these parameter descriptions.)
If you are running the [Demo] TFX - Taxi Tip Prediction Model Trainer pipeline, specify the following:
- pipeline-root: The pipeline-root parameter specifies where the output of the pipeline should be stored. This pipeline saves run artifacts to the AI Platform Pipelines default Cloud Storage bucket. You can override this value to specify the path to a different Cloud Storage bucket that your cluster can access. Learn more about creating a Cloud Storage bucket.
- data-root: The data-root parameter specifies the path to the pipeline's training data. Use the default value.
- module-file: The module-file parameter specifies the path to the source code for a module used in this pipeline. Use the default value. By loading code from a Cloud Storage bucket, you can quickly change the behavior of a component without rebuilding the component's container image.
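To illustrate how a pipeline's parameters are defined when the pipeline is built, the following is a minimal sketch assuming the kfp v1 SDK. It uses a trivial placeholder component rather than the demo's TFX components, and the underscore parameter names and gs:// defaults are placeholders that differ from the demo's hyphenated parameter names.

```python
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

# Assumption: a trivial logging component stands in for real pipeline components.
def log_paths(pipeline_root: str, data_root: str, module_file: str):
    print(pipeline_root, data_root, module_file)

log_paths_op = create_component_from_func(log_paths)

# The pipeline function's parameters and defaults become the run parameters.
@dsl.pipeline(
    name="parameter-demo",
    description="Illustrates how run parameters are declared at build time.",
)
def parameter_demo_pipeline(
    pipeline_root: str = "gs://<your-bucket>/pipeline_root",  # where run artifacts are stored
    data_root: str = "gs://<your-bucket>/data",               # training data location
    module_file: str = "gs://<your-bucket>/module.py",        # module source loaded at run time
):
    log_paths_op(pipeline_root, data_root, module_file)

# Compile to a package that you can upload through the Pipelines UI.
kfp.compiler.Compiler().compile(parameter_demo_pipeline, "parameter_demo_pipeline.yaml")
```

After the compiled package is uploaded, these parameters and their default values appear in the Run parameters section of the Create run form.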
Click Start. The pipelines dashboard displays a list of pipeline runs.
Click the name of your run in the list of pipeline runs. The graph of your run is displayed. While your run is still in progress, the graph changes as each step executes.
Click the pipeline steps to explore your run's inputs, outputs, logs, etc.
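You can also monitor a run programmatically. The following is a hedged sketch assuming the kfp v1 SDK; the host URL and run ID are placeholders.

```python
import kfp

client = kfp.Client(host="https://<your-pipelines-dashboard-url>")

# Block until the run finishes or the timeout (in seconds) expires.
result = client.wait_for_run_completion("<run-id>", timeout=3600)
print(result.run.status)  # for example, "Succeeded" or "Failed"

# Or poll the run's status without blocking.
run_detail = client.get_run("<run-id>")
print(run_detail.run.status)
```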
Understanding the Kubeflow Pipelines user interface
Use the following resources to learn more about the Kubeflow Pipelines user interface.
- Learn more about the goals and main concepts of Kubeflow Pipelines
- Read an overview of the Kubeflow Pipelines interfaces
- Learn more about the terminology used in Kubeflow Pipelines
What's next
- Orchestrate your ML process as a pipeline.
- Learn how to connect to your AI Platform Pipelines cluster using the Kubeflow Pipelines SDK.