Introduction to Vertex AI Pipelines

Vertex AI Pipelines lets you automate, monitor, and govern your machine learning (ML) systems in a serverless manner by using ML pipelines to orchestrate your ML workflows. You can batch run ML pipelines defined using the Kubeflow Pipelines (KFP) or the TensorFlow Extended (TFX) framework. To learn how to choose a framework for defining your ML pipeline, see Interfaces to define a pipeline.

This page provides an overview of ML pipelines: what they are, how they're structured, their life cycle from definition to execution, and how pipeline runs integrate with Vertex ML Metadata and Vertex AI Experiments.

What is an ML pipeline?

An ML pipeline is a portable and extensible description of an MLOps workflow as a series of steps called pipeline tasks. Each task performs a specific step in the workflow to train or deploy an ML model.

With ML pipelines, you can apply MLOps strategies to automate and monitor repeatable processes in your ML practice. For example, you can reuse a pipeline definition to continuously retrain a model on the latest production data. For more information about MLOps in Vertex AI, see MLOps on Vertex AI.

Structure of an ML pipeline

An ML pipeline is a directed acyclic graph (DAG) of containerized pipeline tasks that are interconnected using input-output dependencies. You can author each task either in Python or as a prebuilt container image.

You can define the pipeline as a DAG using either the Kubeflow Pipelines SDK or the TFX SDK, compile it into YAML for intermediate representation, and then run the pipeline. By default, pipeline tasks run in parallel; you can link tasks to execute them in series. For more information about pipeline tasks, see Pipeline task. For more information about the workflow for defining, compiling, and running the pipeline, see Life cycle of an ML pipeline.
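
For example, here is a minimal sketch of this idea using the Kubeflow Pipelines (KFP) SDK; the component and pipeline names are hypothetical. Tasks with no data dependency between them can run in parallel, while an explicit .after() call forces serial execution:

```python
from kfp import dsl


@dsl.component
def say(msg: str) -> str:
    """A lightweight component that echoes its input."""
    print(msg)
    return msg


@dsl.pipeline(name="ordering-demo")
def ordering_demo():
    # No data dependency between these two tasks, so Vertex AI Pipelines
    # can run them in parallel.
    task_a = say(msg="task a")
    task_b = say(msg="task b")

    # .after() adds an explicit ordering edge, so task_c waits for task_a
    # even though no output is passed between them.
    task_c = say(msg="task c")
    task_c.after(task_a)
```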

Pipeline tasks and components

A pipeline task is an instantiation of a pipeline component with specific inputs. While defining your ML pipeline, you can interconnect multiple tasks to form a DAG by routing the outputs of one pipeline task to the inputs of the next pipeline task in the ML workflow. You can also use the inputs for the ML pipeline as the inputs for a pipeline task.
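
As a sketch of both patterns (hypothetical component names, KFP SDK), the pipeline input raw_table feeds the first task, and the first task's output feeds the second task's input, which is what creates the edge in the DAG:

```python
from kfp import dsl


@dsl.component
def preprocess(raw_table: str) -> str:
    # Placeholder logic; a real component would read and transform data.
    return raw_table + "_clean"


@dsl.component
def train(dataset: str, epochs: int) -> str:
    return f"trained on {dataset} for {epochs} epochs"


@dsl.pipeline(name="routing-demo")
def routing_demo(raw_table: str, epochs: int = 10):
    # A pipeline input used as a task input.
    prep_task = preprocess(raw_table=raw_table)
    # One task's output routed to the next task's input.
    train(dataset=prep_task.output, epochs=epochs)
```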

Pipeline component

A pipeline component is a self-contained set of code that performs a specific step of an ML workflow, such as data preprocessing, model training, or model deployment. A component typically consists of the following:

  • Inputs: A component might have one or more input parameters and artifacts.

  • Outputs: Every component has one or more output parameters or artifacts.

  • Logic: This is the component's executable code. For containerized components, the logic also contains the definition of the environment, or container image, where the component runs.

Components are the basis of defining tasks in an ML pipeline. To define pipeline tasks, you can either use predefined Google Cloud Pipeline Components or create your own custom components.
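
To make this anatomy concrete, here is a minimal sketch of a lightweight Python component written with the KFP SDK; the names and logic are illustrative. It shows an input parameter, an output artifact, and the executable logic, with base_image defining the environment the logic runs in:

```python
from kfp import dsl
from kfp.dsl import Dataset, Output


@dsl.component(base_image="python:3.11")  # environment the logic runs in
def make_dataset(
    num_rows: int,             # input: a parameter
    dataset: Output[Dataset],  # output: an artifact written to pipeline storage
):
    # Logic: the component's executable code.
    with open(dataset.path, "w") as f:
        for i in range(num_rows):
            f.write(f"row-{i}\n")
    dataset.metadata["rows"] = num_rows
```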

Predefined components

Use predefined Google Cloud Pipeline Components if you want to use features of Vertex AI, such as AutoML, in your pipeline. To learn how to use Google Cloud Pipeline Components to define a pipeline, see Build a Pipeline.
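
For example, assuming the google-cloud-pipeline-components package is installed (exact parameters can vary by version), a single prebuilt operator can create a Vertex AI endpoint without any custom component code:

```python
from google_cloud_pipeline_components.v1.endpoint import EndpointCreateOp
from kfp import dsl


@dsl.pipeline(name="prebuilt-component-demo")
def endpoint_pipeline(project: str, location: str = "us-central1"):
    # A prebuilt step from google-cloud-pipeline-components; no component
    # code is written for this task.
    EndpointCreateOp(
        project=project,
        location=location,
        display_name="demo-endpoint",
    )
```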

Custom components

You can author your own custom components to use in your ML pipeline. For more information about authoring custom components, see Build your own pipeline components.

To learn how to author custom Kubeflow Pipelines components, see the "Pipelines with lightweight components based on Python functions" Jupyter notebook on GitHub. To learn how to author custom TFX components, see the TFX Python function component tutorial in the TensorFlow Extended in Production tutorials.
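
As a sketch of the containerized authoring style, the following hypothetical KFP container component wraps a shell command in a stock Alpine image; KFP only wires up the command line and the output path:

```python
from kfp import dsl


@dsl.container_component
def shout(message: str, out_path: dsl.OutputPath(str)):
    # The logic runs entirely inside the container image; KFP substitutes
    # the input value and the output path into the command line.
    return dsl.ContainerSpec(
        image="alpine:3.19",
        command=["sh", "-c"],
        args=[
            'mkdir -p "$(dirname "$1")" && echo "$0" | tr a-z A-Z > "$1"',
            message,
            out_path,
        ],
    )
```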

Pipeline task

A pipeline task is the instantiation of a pipeline component and performs a specific step in your ML workflow. You can author ML pipeline tasks either using Python or as prebuilt container images.

Within a task, you can build on the on-demand compute capabilities of Vertex AI with Kubernetes to execute your code at scale, or delegate your workload to another execution engine, such as BigQuery, Dataflow, or Dataproc Serverless.
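
For example, a task can hand a query off to BigQuery with the prebuilt BigqueryQueryJobOp operator; this sketch assumes the google-cloud-pipeline-components package, and the public dataset is just an illustration:

```python
from google_cloud_pipeline_components.v1.bigquery import BigqueryQueryJobOp
from kfp import dsl


@dsl.pipeline(name="delegate-to-bigquery")
def bq_pipeline(project: str, location: str = "US"):
    # The heavy lifting runs inside BigQuery; this pipeline task only
    # submits the query job and waits for it to finish.
    BigqueryQueryJobOp(
        project=project,
        location=location,
        query="SELECT COUNT(*) AS n FROM `bigquery-public-data.samples.shakespeare`",
    )
```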

Life cycle of an ML pipeline

From definition to execution and monitoring, the life cycle of an ML pipeline comprises the following high-level stages:

  1. Define: The process of defining an ML pipeline and its tasks is also called building a pipeline. In this stage, you need to perform the following steps:

    1. Choose an ML framework: Vertex AI Pipelines supports ML pipelines defined using the TFX or Kubeflow Pipelines framework. To learn how to choose a framework for building your pipeline, see Interfaces to define a pipeline.

    2. Define pipeline tasks and configure pipeline: For more information, see Build a Pipeline.

  2. Compile: In this stage, you need to perform the following steps:

    1. Generate your ML pipeline definition as a compiled YAML file for intermediate representation, which you can use to run your ML pipeline. A combined compile-and-run sketch follows this list.

    2. Optional: You can upload the compiled YAML file as a pipeline template to a repository and reuse it to create ML pipeline runs.

  3. Run: Create an execution instance of your ML pipeline using the compiled YAML file or a pipeline template. The execution instance of a pipeline definition is called a pipeline run.

    You can create a one-time occurrence of a pipeline run or use the Vertex AI Scheduler to create recurring pipeline runs from the same ML pipeline definition. You can also clone an existing pipeline run. To learn how to choose an interface to run an ML pipeline, see Interfaces to run a pipeline. For more information about how to create a pipeline run, see Run a pipeline.

  4. Monitor, visualize, and analyze runs: After you create a pipeline run, you can monitor the performance, status, and costs of your pipeline runs, and visualize and analyze their results.

  5. Optional: Stop or delete pipeline runs: There is no restriction on how long you can keep a pipeline run active. You can optionally do the following:

    • Stop a pipeline run.

    • Pause or resume a pipeline run schedule.

    • Delete an existing pipeline template, pipeline run, or pipeline run schedule.
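
The following sketch ties the compile and run stages together using the KFP compiler and the Vertex AI SDK for Python. It reuses the hypothetical routing_demo pipeline from earlier; the project, bucket, and cron values are placeholders, and create_schedule assumes a recent google-cloud-aiplatform version:

```python
from google.cloud import aiplatform
from kfp import compiler

# Compile: turn the pipeline function into its YAML intermediate
# representation.
compiler.Compiler().compile(
    pipeline_func=routing_demo,  # any @dsl.pipeline function
    package_path="pipeline.yaml",
)

# Run: create a one-time pipeline run from the compiled definition.
aiplatform.init(project="my-project", location="us-central1")  # placeholders
job = aiplatform.PipelineJob(
    display_name="demo-run",
    template_path="pipeline.yaml",
    parameter_values={"raw_table": "users", "epochs": 5},
    pipeline_root="gs://my-bucket/pipeline-root",  # placeholder bucket
)
job.submit()

# Schedule: create recurring runs from the same definition.
job.create_schedule(
    display_name="nightly-run",
    cron="0 2 * * *",  # every day at 02:00
)
```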

What is a pipeline run?

A pipeline run is an execution instance of your ML pipeline definition. Each pipeline run is identified by a unique run name. Using Vertex AI Pipelines, you can create an ML pipeline run in the following ways:

  • Use the compiled YAML definition of a pipeline

  • Use a pipeline template from the Template Gallery

For more information about how to create a pipeline run, see Run a pipeline. For more information about how to create a pipeline run from a pipeline template, see Create, upload, and use a pipeline template.
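
As a sketch of the template-based path, a pipeline run can point at a template stored in Artifact Registry instead of a local YAML file; the registry URL below is a placeholder in the documented https://REGION-kfp.pkg.dev format:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders
job = aiplatform.PipelineJob(
    display_name="run-from-template",
    # Placeholder Artifact Registry template URL.
    template_path="https://us-central1-kfp.pkg.dev/my-project/my-repo/my-pipeline/latest",
    pipeline_root="gs://my-bucket/pipeline-root",
)
job.submit()
```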

For information about tracking and storing pipeline run artifacts and metadata using Vertex ML Metadata, see Use Vertex ML Metadata to track the lineage of ML artifacts.

For information about using pipeline runs to experiment on your ML workflow using Vertex AI Experiments, see Add your pipeline runs to experiments.

Use Vertex ML Metadata to track the lineage of ML artifacts

A pipeline run contains several artifacts and parameters, including pipeline metadata. To understand changes in the performance or accuracy of your ML system, you need to analyze the metadata and the lineage of ML artifacts from your ML pipeline runs. The lineage of an ML artifact includes all the factors that contributed to its creation, along with the artifacts and metadata that are derived from it.

Managing this metadata in an ad-hoc manner can be difficult and time-consuming. You can use Vertex ML Metadata to maintain this metadata efficiently. When you run an ML pipeline using Vertex AI Pipelines, the artifacts and metadata from the pipeline run are stored using Vertex ML Metadata.
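
For example, you can fetch an existing run with the Vertex AI SDK and walk its task-level details, including the output artifacts recorded in Vertex ML Metadata. This is a hedged sketch: the resource name is a placeholder, and the field access assumes a recent google-cloud-aiplatform version:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Fetch an existing run by its resource name (placeholder ID).
job = aiplatform.PipelineJob.get(
    "projects/my-project/locations/us-central1/pipelineJobs/demo-run-20240101"
)

# Each task detail carries the metadata recorded for that step.
for task in job.task_details:
    print(task.task_name, task.state)
    for output_name, artifact_list in task.outputs.items():
        for artifact in artifact_list.artifacts:
            print("  ", output_name, artifact.uri)
```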

For more information about tracking the lineage of ML artifacts using Vertex ML Metadata, see Track the lineage of pipeline artifacts.

For more information about visualizing, analyzing, and comparing pipeline runs, see Visualize and analyze pipeline results. For a list of first-party artifact types defined in Google Cloud Pipeline Components, see ML Metadata artifact types.

Add pipeline runs to experiments

Vertex AI Experiments lets you track and analyze various model architectures, hyperparameters, and training environments to find the best model for your ML use case. After you create an ML pipeline run, you can associate it with an experiment or experiment run. By doing so, you can experiment with different sets of variables, such as hyperparameters, number of training steps, or iterations.
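
As a sketch (placeholders throughout, assuming a recent google-cloud-aiplatform version), you associate a run with an experiment by passing the experiment name when you submit the pipeline job:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

job = aiplatform.PipelineJob(
    display_name="lr-0-01",
    template_path="pipeline.yaml",
    parameter_values={"raw_table": "users", "epochs": 20},
    pipeline_root="gs://my-bucket/pipeline-root",
)

# Submitting with an experiment name records the run's parameters and
# metrics under that experiment in Vertex AI Experiments.
job.submit(experiment="lr-sweep")
```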

For more information about experimenting with ML workflows using Vertex AI Experiments, see Introduction to Vertex AI Experiments.

What's next