Machine learning (ML) workflows include steps such as preparing and analyzing data, training and evaluating models, deploying trained models to production, and tracking ML artifacts and their dependencies. Managing these steps in an ad hoc manner can be difficult and time-consuming.
MLOps is the practice of applying DevOps principles to help automate, manage, and audit ML workflows. AI Platform Pipelines helps you implement MLOps by providing a platform where you can orchestrate the steps in your workflow as a pipeline. ML pipelines are portable and reproducible definitions of ML workflows.
AI Platform Pipelines makes it easier to get started with MLOps by sparing you the effort of setting up Kubeflow Pipelines with TensorFlow Extended (TFX) yourself. Kubeflow Pipelines is an open source platform for running, monitoring, auditing, and managing ML pipelines on Kubernetes. TFX is an open source project for building ML pipelines that orchestrate end-to-end ML workflows.
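To give a concrete sense of what a portable, reproducible pipeline definition looks like, here is a minimal sketch using the Kubeflow Pipelines SDK v1 (kfp). The step names, container images, and arguments are hypothetical placeholders, not part of any Google Cloud sample.

```python
# Minimal sketch of an ML pipeline defined with the Kubeflow Pipelines SDK v1.
# The component names and container images below are hypothetical placeholders.
import kfp
from kfp import dsl


def preprocess_op():
    # Each step runs as a container; the image and arguments are placeholders.
    return dsl.ContainerOp(
        name='preprocess-data',
        image='gcr.io/my-project/preprocess:latest',  # hypothetical image
        arguments=['--output', '/tmp/dataset.csv'],
    )


def train_op():
    return dsl.ContainerOp(
        name='train-model',
        image='gcr.io/my-project/train:latest',  # hypothetical image
        arguments=['--input', '/tmp/dataset.csv'],
    )


@dsl.pipeline(
    name='example-ml-pipeline',
    description='A toy pipeline that preprocesses data and then trains a model.'
)
def example_pipeline():
    preprocess = preprocess_op()
    train = train_op()
    train.after(preprocess)  # run training only after preprocessing completes


if __name__ == '__main__':
    # Compile the pipeline into a portable package that can be uploaded to the
    # Kubeflow Pipelines dashboard or submitted through the SDK.
    kfp.compiler.Compiler().compile(example_pipeline, 'example_pipeline.yaml')
```

The compiled package is what makes the workflow portable: the same definition can be uploaded and rerun on any Kubeflow Pipelines cluster.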
Getting started
How-to guides
- Setting up AI Platform Pipelines: Learn how to set up AI Platform Pipelines.
- Creating an ML pipeline: Learn how to orchestrate your ML process as a pipeline.
- Running an ML pipeline: Learn how to access the Kubeflow Pipelines dashboard and run pipelines (a run-submission sketch follows this list).
- Connecting to AI Platform Pipelines using the Kubeflow Pipelines SDK: Learn how to connect to your AI Platform Pipelines cluster using the Kubeflow Pipelines SDK (a connection sketch follows this list).
- Configuring your GKE cluster: Configure your Google Kubernetes Engine cluster to ensure that AI Platform Pipelines has sufficient computational resources and access to Google Cloud resources, such as Cloud Storage or BigQuery.
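As a rough illustration of connecting through the SDK, the sketch below assumes you have the URL of your cluster's Kubeflow Pipelines dashboard from the AI Platform Pipelines page; the host value shown is a placeholder.

```python
# Minimal sketch: connect to an AI Platform Pipelines cluster with the
# Kubeflow Pipelines SDK. Replace the placeholder host with the URL shown
# for your cluster on the AI Platform Pipelines page.
import kfp

client = kfp.Client(host='https://<your-cluster>.pipelines.googleusercontent.com')

# List the pipelines already uploaded to the cluster to verify the connection.
print(client.list_pipelines())
```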
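Runs can be started from the Kubeflow Pipelines dashboard, or programmatically as in the sketch below, which assumes the `client` above and the hypothetical `example_pipeline` function from the earlier definition sketch; the experiment and run names are arbitrary labels.

```python
# Minimal sketch: submit a run of the pipeline function defined earlier.
# create_run_from_pipeline_func compiles the function and starts a run on the
# cluster that the client is connected to.
run = client.create_run_from_pipeline_func(
    example_pipeline,                    # pipeline function from the earlier sketch
    arguments={},                        # pipeline parameters, if any
    experiment_name='getting-started',   # arbitrary experiment label
    run_name='example-run',              # arbitrary run label
)
print(run.run_id)
```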