Running a machine learning pipeline

AI Platform Pipelines lets you automate your machine learning (ML) workflow as a pipeline. By running your ML process as a pipeline, you can:

  • Run pipelines on an ad hoc basis.
  • Schedule recurring runs to retrain your model on a regular basis.
  • Experiment by running your pipeline with different sets of hyperparameters, numbers of training steps or iterations, and so on, then compare the results of your experiments.

This guide describes how to run a pipeline and schedule recurring runs. It also provides resources that you can use to learn more about the Kubeflow Pipelines user interface.

Before you begin

This guide describes how to use the Kubeflow Pipelines user interface to run a pipeline. Before you can run a pipeline, you must set up your AI Platform Pipelines cluster and ensure that you have sufficient permissions to access it.

Run an ML pipeline

Use the following instructions to run an ML pipeline on your AI Platform Pipelines cluster.

  1. Open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

  2. Click Open pipelines dashboard for your Kubeflow Pipelines cluster. The Kubeflow Pipelines user interface opens in a new tab.

  3. In the left navigation panel, click Pipelines.

  4. Click the name of the pipeline that you want to run. If you have not yet loaded a pipeline, click the name of an example pipeline like [Demo] TFX - Taxi Tip Prediction Model Trainer. A graph displaying the steps in the pipeline opens.

  5. To run or schedule the pipeline, click Create run. A form opens where you can enter the run details.

  6. Before you run a pipeline, you must specify the run details, run type, and run parameters.

    • In the Run details section, specify the following:

      1. Pipeline: Select the pipeline that you want to run.
      2. Pipeline Version: Select the version of the pipeline that you want to run.
      3. Run name: Enter a unique name for this run. You can use the name to find this run later.
      4. Description: (Optional) Enter a description to provide more information about this run.
      5. Experiment: (Optional) To group related runs together, select an experiment.
    • In the Run type section, indicate how frequently this run should be executed.

      1. Select whether this is a One-off or Recurring run.
      2. If this is a recurring run, specify the run trigger:

        1. Trigger type: Select whether this run is triggered on a periodic basis or based on a cron schedule.
        2. Maximum concurrent runs: Enter the maximum number of runs that can be active at one time.
        3. Has start date: Check Has start date, then enter the Start date and Start time to specify when this trigger should start creating runs.
        4. Has end date: Check Has end date, then enter the End date and End time to specify when this trigger should stop creating runs.
        5. Run every: Select the frequency for triggering new runs. If this run is triggered based on a cron schedule, check Allow editing cron expression to enter a cron expression directly.
    • In the Run parameters section, customize the pipeline parameters for this run. You can use parameters to set values such as paths for loading training data or storing artifacts, hyperparameters, the number of training iterations, and so on. A pipeline's parameters are defined when the pipeline is built.

      If you are running the [Demo] TFX - Taxi Tip Prediction Model Trainer pipeline, specify the following:

      1. pipeline-root: The pipeline-root parameter specifies where the output of the pipeline should be stored. This pipeline saves run artifacts to the AI Platform Pipelines default Cloud Storage bucket.

        You can override this value to specify the path to a different Cloud Storage bucket that your cluster can access. Learn more about creating a Cloud Storage bucket.

      2. data-root: The data-root parameter specifies the path to the pipeline's training data. Use the default value.

      3. module-file: The module-file parameter specifies the path to the source code for a module used in this pipeline. Use the default value.

        By loading code from a Cloud Storage bucket, you can quickly change the behavior of a component without rebuilding the component's container image.

  7. Click Start. The pipelines dashboard displays a list of pipeline runs.

  8. Click the name of your run in the list of pipeline runs. The graph of your run is displayed. While your run is still in progress, the graph changes as each step executes.

  9. Click the pipeline steps to explore your run's inputs, outputs, logs, and so on.
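The run configuration described in the steps above can be sketched in code. The following is a minimal illustration, not the Kubeflow Pipelines SDK: the Cloud Storage paths are hypothetical placeholders (not the demo pipeline's actual defaults), and the helper simply models how a periodic trigger with a start date, end date, and Run every interval produces runs.

```python
from datetime import datetime, timedelta

# Hypothetical run parameters for the [Demo] TFX - Taxi Tip Prediction
# Model Trainer pipeline. These gs:// paths are placeholders; use the
# pipeline's actual defaults or paths your cluster can access.
run_parameters = {
    "pipeline-root": "gs://your-bucket/pipeline-output",  # where run artifacts are stored
    "data-root": "gs://your-bucket/taxi-data",            # training data location
    "module-file": "gs://your-bucket/taxi_utils.py",      # component source code
}

def periodic_run_times(start, end, every):
    """Return the times at which a periodic trigger would create runs,
    from its start date until its end date (inclusive)."""
    times = []
    current = start
    while current <= end:
        times.append(current)
        current += every
    return times

# A trigger that runs every 6 hours during a single day creates
# runs at 00:00, 06:00, 12:00, and 18:00.
times = periodic_run_times(
    start=datetime(2024, 1, 1, 0, 0),
    end=datetime(2024, 1, 1, 23, 59),
    every=timedelta(hours=6),
)
print(len(times))  # 4
```

A cron-based trigger works the same way, except that the run times are determined by a cron expression instead of a fixed interval.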

Understanding the Kubeflow Pipelines user interface

Use the following resources to learn more about the Kubeflow Pipelines user interface.

What's next