AI Platform Pipelines saves you the difficulty of setting up Kubeflow Pipelines with TensorFlow Extended on Google Kubernetes Engine. To upgrade your AI Platform Pipelines cluster, you must remove your existing AI Platform Pipelines cluster, and then reinstall it using the same storage settings. This guide describes:
- How to determine what storage method your cluster uses.
- How to collect the information you need to upgrade your cluster.
- The steps required to upgrade your AI Platform Pipelines cluster to a more recent release of Kubeflow Pipelines.
Preparing to upgrade your cluster
The process to upgrade your AI Platform Pipelines cluster depends on the method that your cluster uses to store pipeline artifacts and metadata. Use the following instructions to determine where your AI Platform Pipelines cluster stores pipeline artifacts and metadata.
Open AI Platform Pipelines in the Google Cloud console.
In the row for your AI Platform Pipelines cluster, note the values of the Name, Cluster, Zone, and Namespace columns. You will use this information in later steps.
Open a Cloud Shell session.
Cloud Shell opens in a frame at the bottom of the Google Cloud console.
Run the following commands to configure kubectl with access to your GKE cluster and set the context to the namespace where Kubeflow Pipelines was installed.

```shell
gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE
kubectl config set-context --current --namespace=NAMESPACE
```
Replace the following:
- CLUSTER_NAME: The name of your GKE cluster, which you noted in an earlier step.
- ZONE: The zone your GKE cluster resides in, which you noted in an earlier step.
- NAMESPACE: The namespace where Kubeflow Pipelines was installed, which you noted in an earlier step.
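For example, with hypothetical values substituted for the name, zone, and namespace you noted (replace them with the values from your own console row), the two commands can be built and reviewed before running. This is only a sketch of the substitution, not part of the official procedure:

```shell
#!/bin/sh
# Hypothetical example values -- replace with the Name, Zone, and
# Namespace you noted in the AI Platform Pipelines console.
CLUSTER_NAME="my-pipelines-cluster"
ZONE="us-central1-a"
NAMESPACE="kubeflow"

# Build the two commands so the substitutions can be reviewed before running.
GET_CREDS="gcloud container clusters get-credentials ${CLUSTER_NAME} --zone=${ZONE}"
SET_NS="kubectl config set-context --current --namespace=${NAMESPACE}"
printf '%s\n%s\n' "$GET_CREDS" "$SET_NS"
```

Once the printed commands look right, run them directly (or pipe the output to sh).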
AI Platform Pipelines stores your cluster's pipeline artifacts and metadata using managed storage services or on-cluster persistent disks. When you upgrade your cluster, you must reinstall AI Platform Pipelines using the same storage settings that your current AI Platform Pipelines cluster uses.
Run the following command in Cloud Shell to check if your cluster was deployed with on-cluster storage.
kubectl get pvc -o json | jq -r '.items[].metadata.name'
This command lists your Google Kubernetes Engine cluster's persistent volume claims (PVCs).
- If this list contains mysql-pv-claim and minio-pvc, your AI Platform Pipelines cluster was deployed using on-cluster storage. Upgrade an AI Platform Pipelines cluster that uses on-cluster storage.
- Otherwise, your cluster was deployed using managed storage. Upgrade an AI Platform Pipelines cluster that uses managed storage.
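The decision above can also be scripted. The following sketch classifies the storage mode from the PVC names printed by the previous command (the function name is just an illustration):

```shell
#!/bin/sh
# Reads PVC names (one per line) on stdin, as printed by:
#   kubectl get pvc -o json | jq -r '.items[].metadata.name'
# Prints "on-cluster" if both Kubeflow Pipelines PVCs are present,
# otherwise "managed".
storage_mode() {
  names="$(cat)"
  if printf '%s\n' "$names" | grep -qx 'mysql-pv-claim' &&
     printf '%s\n' "$names" | grep -qx 'minio-pvc'; then
    echo "on-cluster"
  else
    echo "managed"
  fi
}

# Example usage against the live cluster:
#   kubectl get pvc -o json | jq -r '.items[].metadata.name' | storage_mode
# Sample check on canned input:
printf 'mysql-pv-claim\nminio-pvc\n' | storage_mode
```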
Upgrading an AI Platform Pipelines cluster that uses managed storage
Use the following instructions to back up your AI Platform Pipelines cluster's artifacts and metadata, and upgrade the cluster to a more recent release of Kubeflow Pipelines.
When you upgrade a Kubeflow Pipelines cluster, you must reuse the same storage configuration. If your cluster was deployed with managed storage, use the following instructions to find the configuration details that are required to upgrade your cluster.
Run the following command in Cloud Shell to get the name of the bucket where your cluster stores pipeline artifacts:
```shell
kubectl get configmap workflow-controller-configmap -o json | \
  jq -r '.data.config | capture("bucket: '"'(?<name>.*?)'"'").name'
```
Run the following command in Cloud Shell to get the instance connection name of the Cloud SQL instance where your cluster stores pipeline metadata.
```shell
kubectl get deployments cloudsqlproxy -o json | \
  jq -r '.spec.template.spec.containers[].command[] | capture("instances=(?<name>.*)=").name'
```
Kubeflow Pipelines depends on two MySQL databases. Run the following command in Cloud Shell to get the database prefix of your cluster's databases.
```shell
kubectl get configmap metadata-mysql-configmap -o json | \
  jq -r '.data.MYSQL_DATABASE | capture("(?<prefix>.*?)_metadata").prefix'
```
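Each of the three jq captures above pulls a substring out of a larger config value. As a sanity check on that capture logic, the same extractions can be reproduced with plain shell string operations on hypothetical example values (the values shown here are invented; the jq commands above do the real work against the live configmaps):

```shell
#!/bin/sh
# Hypothetical raw values, mimicking what the configmaps/deployment contain.
config_line="bucket: 'my-project-kfp-artifacts'"                 # workflow-controller-configmap
proxy_arg="--instances=my-project:us-central1:kfp-sql=tcp:3306"  # cloudsqlproxy command argument
mysql_db="kfp_metadata"                                          # metadata-mysql-configmap

q="'"
# Bucket name: the text between the single quotes.
bucket="${config_line#*$q}"; bucket="${bucket%$q*}"
# Instance connection name: between "instances=" and the next "=".
conn="${proxy_arg#*instances=}"; conn="${conn%%=*}"
# Database prefix: everything before the trailing "_metadata".
prefix="${mysql_db%_metadata}"

printf '%s\n%s\n%s\n' "$bucket" "$conn" "$prefix"
```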
You must also specify the username and password of a MySQL account with the ALL privilege that Kubeflow Pipelines can use to connect to your Cloud SQL instance. If you do not know which MySQL user account your cluster uses, use Cloud SQL to create a MySQL user.
Use Cloud SQL to create a backup of your AI Platform Pipelines cluster's MySQL databases.
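Both of these tasks can also be done from the command line with gcloud. The instance and user names below are hypothetical placeholders; substitute the instance connection details you found above. The commands are printed for review rather than executed:

```shell
#!/bin/sh
# Hypothetical instance and user names -- substitute your own.
INSTANCE="kfp-metadata-instance"
DB_USER="kfp-user"

# (1) Create a MySQL user (only needed if you don't know which account
#     your cluster uses) and (2) back up the instance's databases.
CREATE_USER="gcloud sql users create ${DB_USER} --instance=${INSTANCE} --password=CHANGE_ME"
BACKUP="gcloud sql backups create --instance=${INSTANCE}"
printf '%s\n%s\n' "$CREATE_USER" "$BACKUP"
```

Run the printed commands once the values look right; choose a real password in place of CHANGE_ME.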
Open AI Platform Pipelines in the Google Cloud console.
Use the following instructions to delete your AI Platform Pipelines cluster. To upgrade your AI Platform Pipelines cluster, you must reinstall Kubeflow Pipelines with the same managed storage settings.
Select the checkbox for your AI Platform Pipelines cluster.
In the AI Platform Pipelines toolbar, click Delete. The Delete Kubeflow Pipelines from cluster dialog appears.
Click Delete. Deleting your AI Platform Pipelines cluster may take several minutes.
Use the following instructions to reinstall Kubeflow Pipelines.
In the AI Platform Pipelines toolbar, click New instance. Kubeflow Pipelines opens in Google Cloud Marketplace.
Click Configure. A form opens for you to configure your Kubeflow Pipelines deployment.
Select the Cluster to deploy Kubeflow Pipelines to. This cluster does not need to be the same GKE cluster that your previous AI Platform Pipelines instance was deployed to.
Learn how to ensure that your GKE cluster is configured correctly for AI Platform Pipelines.
In the App instance name box, enter the application instance name that your previous Kubeflow Pipelines instance used.
Namespaces are used to manage resources in large GKE clusters. If you do not plan to use namespaces in your cluster, select default in the Namespace drop-down list.
If you plan to use namespaces in your GKE cluster, create a namespace using the Namespace drop-down list. To create a namespace:
Select Create a namespace in the Namespace drop-down list. The New namespace name box appears.
Enter the namespace name in New namespace name.
To learn more about namespaces, read a blog post about organizing Kubernetes with namespaces.
Managed storage lets you store your ML pipeline's metadata and artifacts using Cloud SQL and Cloud Storage. Select Use managed storage and supply the following information:
- Artifact storage Cloud Storage bucket: Specify the bucket name which you found in a previous step.
- Cloud SQL instance connection name: Specify the instance connection name which you found in a previous step.
- Database username: Specify the database username for Kubeflow Pipelines to use when connecting to your MySQL instance. Currently, your database user must have ALL MySQL privileges to deploy Kubeflow Pipelines with managed storage. If you leave this field empty, this value defaults to root.
- Database password: Specify the database password for Kubeflow Pipelines to use when connecting to your MySQL instance. If you leave this field empty, Kubeflow Pipelines connects to your database without providing a password, which fails if a password is required for the username you specified.
- Database name prefix: Specify the database name prefix which you found in a previous step.
Click Deploy. This step may take several minutes.
To access the pipelines dashboard, open AI Platform Pipelines in the Google Cloud console.
Then, click Open pipelines dashboard for your AI Platform Pipelines instance.
Upgrading an AI Platform Pipelines cluster that uses on-cluster storage
Use the following instructions to back up your AI Platform Pipelines cluster's artifacts and metadata, and upgrade the cluster to a more recent release of Kubeflow Pipelines.
Back up your AI Platform Pipelines cluster's metadata and artifact storage. With on-cluster storage, your pipeline artifacts and metadata are stored on Compute Engine persistent disks, which are attached to your GKE cluster as persistent volume claims.
To perform this task, you must have the following permissions:
- compute.disks.create on the project
- compute.disks.useReadOnly on the source disk
For example, the roles/compute.storageAdmin role provides these permissions. Learn more about granting Identity and Access Management permissions and roles.
Run the following command in Cloud Shell to list your cluster's PVCs and, where appropriate, each PVC's Compute Engine persistent disk name.
```shell
kubectl get pv -o json | \
  jq -r '.items[] | .spec.claimRef.name + " - disk name = " + .spec.gcePersistentDisk.pdName'
```
If your cluster uses on-cluster storage, this list should contain persistent disk names for the mysql-pv-claim and minio-pvc PVCs.
To back up your pipeline artifacts and metadata, run the following command in Cloud Shell for the mysql-pv-claim and minio-pvc persistent disks.
```shell
gcloud compute disks create TARGET_DISK_NAME --zone=ZONE --source-disk=SOURCE_DISK_NAME
```
Replace the following:
- TARGET_DISK_NAME: Specify a name for your backup disk.
- ZONE: Specify the zone that your cluster resides in.
- SOURCE_DISK_NAME: Specify the name of the persistent disk that you want to back up.
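Since you need one backup per source disk, a small helper can generate both commands. The disk names below are hypothetical examples of what the `kubectl get pv` listing might show; substitute your own:

```shell
#!/bin/sh
# Prints a `gcloud compute disks create` backup command for one source disk.
# $1 = source disk name, $2 = zone.
backup_cmd() {
  echo "gcloud compute disks create $1-backup --zone=$2 --source-disk=$1"
}

# Hypothetical disk names taken from the `kubectl get pv` listing above;
# review the printed commands, then run them (or pipe the output to sh).
for disk in gke-kfp-mysql-pd gke-kfp-minio-pd; do
  backup_cmd "$disk" us-central1-a
done
```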
Open AI Platform Pipelines in the Google Cloud console.
Use the following instructions to delete your AI Platform Pipelines cluster, without deleting your GKE cluster. To upgrade your AI Platform Pipelines cluster, you must reinstall Kubeflow Pipelines on the same GKE cluster.
Select the checkbox for your AI Platform Pipelines cluster.
In the AI Platform Pipelines toolbar, click Delete. The Delete Kubeflow Pipelines from cluster dialog appears.
Click Delete. Deleting your AI Platform Pipelines cluster may take several minutes.
Use the following instructions to reinstall Kubeflow Pipelines on your existing GKE cluster.
In the AI Platform Pipelines toolbar, click New instance. Kubeflow Pipelines opens in Google Cloud Marketplace.
Click Configure. A form opens for you to configure your Kubeflow Pipelines deployment.
In the Cluster drop-down list, select the cluster that your previous instance of Kubeflow Pipelines was deployed in.
Select the Namespace that your previous instance of Kubeflow Pipelines was deployed in.
In the App instance name box, enter the application instance name that your previous Kubeflow Pipelines instance used.
Click Deploy. This step may take several minutes.
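While the deployment finishes, you can watch pod status from Cloud Shell. This sketch filters `kubectl get pods` output down to any pod that is not yet healthy (the filter is an illustration, not part of the official procedure):

```shell
#!/bin/sh
# Lists any pod that is not Running or Completed, reading the output of
# `kubectl get pods --no-headers` on stdin (column 3 is the STATUS field).
unhealthy_pods() {
  awk '$3 != "Running" && $3 != "Completed" { print $1 }'
}

# Example usage against the live cluster:
#   kubectl get pods -n NAMESPACE --no-headers | unhealthy_pods
# Sample check on canned output:
printf 'ml-pipeline-abc 1/1 Running 0 5m\nmysql-0 0/1 Pending 0 5m\n' | unhealthy_pods
```

An empty result means every pod in the namespace has settled.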
To access the pipelines dashboard, open AI Platform Pipelines in the Google Cloud console.
Then, click Open pipelines dashboard for your AI Platform Pipelines instance.
What's next
- Orchestrate your ML process as a pipeline.
- Learn how to run your ML pipelines.
- Learn how to connect to your AI Platform Pipelines cluster using the Kubeflow Pipelines SDK.