Setting up AI Platform Pipelines

With AI Platform Pipelines, you can orchestrate your machine learning (ML) workflows as reusable and reproducible pipelines. AI Platform Pipelines saves you the difficulty of setting up Kubeflow Pipelines with TensorFlow Extended (TFX) on Google Kubernetes Engine (GKE).

This guide describes several options for deploying AI Platform Pipelines on GKE. You can deploy Kubeflow Pipelines on an existing GKE cluster or create a new GKE cluster. If you want to reuse an existing GKE cluster, ensure that your cluster meets the following requirements:

  • Your cluster must have at least 3 nodes. Each node must have at least 2 CPUs and 4 GB of memory available.
  • The cluster's access scope must grant full access to all Cloud APIs, or your cluster must use a custom service account.
  • The cluster must not already have Kubeflow Pipelines installed.
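
If you plan to reuse an existing cluster, you can check most of these requirements from the command line. The following is a minimal sketch using gcloud; CLUSTER_NAME and ZONE are placeholders for your cluster's name and zone:

  # List the node count, machine type, service account, and access scopes for the cluster.
  gcloud container clusters describe CLUSTER_NAME \
    --zone=ZONE \
    --format="value(currentNodeCount, nodeConfig.machineType, nodeConfig.serviceAccount, nodeConfig.oauthScopes)"

The output should show at least 3 nodes, a machine type with at least 2 CPUs and 4 GB of memory (such as n1-standard-2 or larger), and either the https://www.googleapis.com/auth/cloud-platform access scope or a custom service account.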

Select the deployment option that best fits your situation:

  • Deploy AI Platform Pipelines with full access to Google Cloud: AI Platform Pipelines creates a new GKE cluster for you and grants it full access to Google Cloud resources and APIs.
  • Deploy AI Platform Pipelines with granular access to Google Cloud: You create a user-managed service account and a GKE cluster that uses it, then deploy Kubeflow Pipelines onto that cluster.
  • Deploy AI Platform Pipelines to an existing GKE cluster: You deploy Kubeflow Pipelines onto a GKE cluster that you already have.

Before you begin

Before following this guide, check that your Google Cloud project is correctly set up and that you have sufficient permissions to deploy AI Platform Pipelines.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Use the following instructions to check if you have been granted the roles required to deploy AI Platform Pipelines.
    1. Open a Cloud Shell session.

      Open Cloud Shell

      Cloud Shell opens in a frame at the bottom of the Google Cloud console.

    2. To deploy AI Platform Pipelines, you must have the Viewer (roles/viewer) and Kubernetes Engine Admin (roles/container.admin) roles on the project, or another role that includes the same permissions, such as Owner (roles/owner). Run the following command in Cloud Shell to list the principals that have the Viewer and Kubernetes Engine Admin roles.

      gcloud projects get-iam-policy PROJECT_ID \
        --flatten="bindings[].members" --format="table(bindings.role, bindings.members)" \
        --filter="bindings.role:roles/container.admin OR bindings.role:roles/viewer"

      Replace PROJECT_ID with the ID of your Google Cloud project.

      Use the output of this command to verify that your account has the Viewer and Kubernetes Engine Admin roles.

    3. If you want to grant your cluster granular access, you must also have the Service Account Admin (roles/iam.serviceAccountAdmin) role on the project, or another role that includes the same permissions, such as Editor (roles/editor) or Owner (roles/owner). Run the following command in Cloud Shell to list the principals that have the Service Account Admin role.

      gcloud projects get-iam-policy PROJECT_ID \
        --flatten="bindings[].members" --format="table(bindings.role, bindings.members)" \
        --filter="bindings.role:roles/iam.serviceAccountAdmin"

      Replace PROJECT_ID with the ID of your Google Cloud project.

      Use the output of this command to verify that your account has the Service Account Admin role.

    4. If you have not been granted the required roles, contact your Google Cloud project administrator for additional help.

      Learn more about granting Identity and Access Management roles.

Deploy AI Platform Pipelines with full access to Google Cloud

AI Platform Pipelines makes it easier to set up and use Kubeflow Pipelines by creating a GKE cluster for you and deploying Kubeflow Pipelines onto the cluster. When AI Platform Pipelines creates a GKE cluster for you, that cluster uses the default Compute Engine service account. To provide your cluster with full access to the Google Cloud resources and APIs that you have enabled in your project, you can grant your cluster access to the https://www.googleapis.com/auth/cloud-platform access scope. Granting access in this way lets ML pipelines that run on your cluster access Google Cloud APIs, such as AI Platform Training and AI Platform Prediction. While this process makes it easier to set up AI Platform Pipelines, it may grant your pipeline developers excessive access to Google Cloud resources and APIs.

Use the following instructions to deploy AI Platform Pipelines with full access to Google Cloud resources and APIs.
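
These steps create a new GKE cluster from the Google Cloud console. If you would rather create a suitable cluster from the command line first and then select it during deployment, the following gcloud sketch creates a cluster that meets the requirements; CLUSTER_NAME and ZONE are placeholders:

  # Create a 3-node cluster whose nodes have full access to Cloud APIs.
  gcloud container clusters create CLUSTER_NAME \
    --zone=ZONE \
    --num-nodes=3 \
    --machine-type=n1-standard-2 \
    --scopes=https://www.googleapis.com/auth/cloud-platform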

  1. Open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

  2. In the AI Platform Pipelines toolbar, click New instance. Kubeflow Pipelines opens in Google Cloud Marketplace.

  3. Click Configure. The Deploy Kubeflow Pipelines form opens.

  4. If the Create a new cluster link is displayed, click Create a new cluster. Otherwise, continue to the next step.

  5. Select the Cluster zone where your cluster should be located. For help deciding which zone to use, read the best practices for region selection.

  6. Check Allow access to the following Cloud APIs to grant applications that run on your GKE cluster access to Google Cloud resources. By checking this box, you are granting your cluster access to the https://www.googleapis.com/auth/cloud-platform access scope. This access scope provides full access to the Google Cloud resources that you have enabled in your project. Granting your cluster access to Google Cloud resources in this manner saves you the effort of creating and managing a service account or creating a Kubernetes secret.

  7. Click Create cluster. This step may take several minutes.

  8. Namespaces are used to manage resources in large GKE clusters. If you do not plan to use namespaces in your cluster, select default in the Namespace drop-down list.

    If you plan to use namespaces in your GKE cluster, create a namespace using the Namespace drop-down list. To create a namespace:

    1. Select Create a namespace in the Namespace drop-down list. The New namespace name box appears.
    2. Enter the namespace name in New namespace name.

    To learn more about namespaces, read a blog post about organizing Kubernetes with namespaces.
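
    If you prefer working from the command line, you can also create a namespace with kubectl, assuming kubectl is installed and configured for your cluster; NAMESPACE_NAME is a placeholder:

      # Create a namespace for the Kubeflow Pipelines deployment.
      kubectl create namespace NAMESPACE_NAME

      # List namespaces to confirm that it was created.
      kubectl get namespaces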

  9. In the App instance name box, enter a name for your Kubeflow Pipelines instance.

  10. Managed storage lets you store your ML pipeline's metadata and artifacts using Cloud SQL and Cloud Storage, instead of storing them on Compute Engine persistent disks. Using managed services to store your pipeline artifacts and metadata makes it easier to back up and restore your cluster's data. To deploy Kubeflow Pipelines with managed storage, select Use managed storage and supply the following information:

    • Artifact storage Cloud Storage bucket: With managed storage, Kubeflow Pipelines stores pipeline artifacts in a Cloud Storage bucket. Specify the name of the bucket you want Kubeflow Pipelines to store artifacts in. If the specified bucket doesn't exist, the Kubeflow Pipelines deployer automatically creates a bucket for you in the us-central1 region.

      Learn more about creating a new bucket.

    • Cloud SQL instance connection name: With managed storage, Kubeflow Pipelines stores pipeline metadata in a MySQL database on Cloud SQL. Specify the connection name for your Cloud SQL MySQL instance.

      Learn more about setting up your Cloud SQL instance.

    • Database username: Specify the database username for Kubeflow Pipelines to use when connecting to your MySQL instance. Currently, your database user must have ALL MySQL privileges to deploy Kubeflow Pipelines with managed storage. If you leave this field empty, this value defaults to root.

      Learn more about MySQL users.

    • Database password: Specify the database password for Kubeflow Pipelines to use when connecting to your MySQL instance. If you leave this field empty, Kubeflow Pipelines connects to your database without providing a password, which fails if a password is required for the username you specified.

    • Database name prefix: Specify the database name prefix. The prefix value must start with a letter and contain only lowercase letters, numbers, and underscores.

      During the deployment process, Kubeflow Pipelines creates two databases, "DATABASE_NAME_PREFIX_pipeline" and "DATABASE_NAME_PREFIX_metadata". If databases with these names exist in your MySQL instance, Kubeflow Pipelines reuses the existing databases. If this value is not specified, the App instance name is used as the database name prefix.
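
    If you want to provision these managed storage resources ahead of time, the following sketch shows one way to do so with gcloud and gsutil. The bucket, instance, and user names are placeholders, and the region, tier, and MySQL version are assumptions to adjust for your environment:

      # Create a Cloud Storage bucket for pipeline artifacts.
      gsutil mb -l us-central1 gs://BUCKET_NAME

      # Create a Cloud SQL MySQL instance for pipeline metadata.
      gcloud sql instances create INSTANCE_NAME \
        --database-version=MYSQL_8_0 \
        --tier=db-n1-standard-1 \
        --region=us-central1

      # Create the database user that Kubeflow Pipelines will connect as.
      gcloud sql users create DATABASE_USERNAME \
        --instance=INSTANCE_NAME \
        --password=DATABASE_PASSWORD

      # Print the instance connection name to paste into the form.
      gcloud sql instances describe INSTANCE_NAME \
        --format="value(connectionName)"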

  11. Click Deploy. This step may take several minutes.

  12. To access the pipelines dashboard, open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

    Then, click Open pipelines dashboard for your AI Platform Pipelines instance.

Deploy AI Platform Pipelines with granular access to Google Cloud

ML pipelines access Google Cloud resources using the service account and access scope of the GKE cluster's node pool. Currently, to limit your cluster's access to specific Google Cloud resources, you must deploy AI Platform Pipelines onto a GKE cluster that uses a user-managed service account.

Use the instructions in the following sections to create and configure a service account, create a GKE cluster using your service account, and deploy Kubeflow Pipelines onto your GKE cluster.

Create a service account for your GKE cluster

Use the following instructions to set up a service account for your GKE cluster.

  1. Open a Cloud Shell session.

    Open Cloud Shell

    Cloud Shell opens in a frame at the bottom of the Google Cloud console.

  2. Run the following commands in Cloud Shell to create your service account and grant it sufficient access to run AI Platform Pipelines. Learn more about the roles required to run AI Platform Pipelines with a user-managed service account.

    export PROJECT=PROJECT_ID
    export SERVICE_ACCOUNT=SERVICE_ACCOUNT_NAME
    gcloud iam service-accounts create $SERVICE_ACCOUNT \
      --display-name=$SERVICE_ACCOUNT \
      --project=$PROJECT
    gcloud projects add-iam-policy-binding $PROJECT \
      --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT.iam.gserviceaccount.com" \
      --role=roles/logging.logWriter
    gcloud projects add-iam-policy-binding $PROJECT \
      --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT.iam.gserviceaccount.com" \
      --role=roles/monitoring.metricWriter
    gcloud projects add-iam-policy-binding $PROJECT \
      --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT.iam.gserviceaccount.com" \
      --role=roles/monitoring.viewer
    gcloud projects add-iam-policy-binding $PROJECT \
      --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT.iam.gserviceaccount.com" \
      --role=roles/storage.objectViewer

    Replace the following:

    • SERVICE_ACCOUNT_NAME: The name of the service account to create.
    • PROJECT_ID: The Google Cloud project that the service account is created in.
  3. Grant your service account access to any Google Cloud resources or APIs that your ML pipelines require. Learn more about Identity and Access Management roles and managing service accounts.
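
    For example, if your pipelines submit jobs to AI Platform Training and Prediction and write artifacts to Cloud Storage, you might grant roles such as the following. This is an illustrative sketch; the roles your pipelines actually need depend on the services they call:

    gcloud projects add-iam-policy-binding $PROJECT \
      --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT.iam.gserviceaccount.com" \
      --role=roles/ml.developer
    gcloud projects add-iam-policy-binding $PROJECT \
      --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT.iam.gserviceaccount.com" \
      --role=roles/storage.objectAdmin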

  4. Grant your user account the Service Account User (roles/iam.serviceAccountUser) role on your service account.

    gcloud iam service-accounts add-iam-policy-binding \
      "SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
      --member=user:USERNAME \
      --role=roles/iam.serviceAccountUser
    

    Replace the following:

    • SERVICE_ACCOUNT_NAME: The name of your service account.
    • PROJECT_ID: Your Google Cloud project.
    • USERNAME: Your Google Cloud username (an email address, such as you@example.com).
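
    To confirm the bindings, you can list the roles granted to your service account, using the same pattern as the earlier IAM checks:

    gcloud projects get-iam-policy PROJECT_ID \
      --flatten="bindings[].members" \
      --format="table(bindings.role, bindings.members)" \
      --filter="bindings.members:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com"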

Set up your GKE cluster

Use the following instructions to set up your GKE cluster.

  1. Open Google Kubernetes Engine in the Google Cloud console.

    Open Google Kubernetes Engine

  2. Click the Create cluster button. The Cluster basics form opens.

  3. Enter the Name for your cluster.

  4. For the Location type, select Zonal, and then select the desired zone for your cluster. For help deciding which zone to use, read the best practices for region selection.

  5. From the navigation pane, under Node Pools, click default-pool. The Node pool details form appears.

  6. Enter the Number of nodes to create in the cluster. Your cluster must have 3 or more nodes to deploy AI Platform Pipelines. You must have available resource quota for the nodes and their resources (such as firewall rules and routes).

  7. From the navigation pane, under Node Pools, click Nodes. The Nodes form opens.

  8. Choose the default Machine configuration to use for the instances. You must select a machine type with at least 2 CPUs and 4 GB of memory, such as n1-standard-2, to deploy AI Platform Pipelines. Each machine type is billed differently. For machine type pricing information, refer to the machine type price sheet.

  9. From the navigation pane, under Node Pools, click Security. The Node security form appears.

  10. From the Service account drop-down list, select the service account that you created earlier in this guide.

  11. Configure the rest of your GKE cluster as desired. Learn more about creating a GKE cluster.

  12. Click Create.
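
If you prefer to script cluster creation, the console steps above correspond roughly to the following gcloud command. This is a sketch; CLUSTER_NAME and ZONE are placeholders, and the machine type must still meet the deployment requirements:

  # Create a 3-node zonal cluster whose nodes run as your user-managed service account.
  gcloud container clusters create CLUSTER_NAME \
    --zone=ZONE \
    --num-nodes=3 \
    --machine-type=n1-standard-2 \
    --service-account="SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com"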

Install Kubeflow Pipelines on your GKE cluster

Use the following instructions to set up Kubeflow Pipelines on a GKE cluster.

  1. Open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

  2. In the AI Platform Pipelines toolbar, click New instance. Kubeflow Pipelines opens in Google Cloud Marketplace.

  3. Click Configure. The Deploy Kubeflow Pipelines form opens.

  4. In the Cluster drop-down list, select the cluster you created in an earlier step. If the cluster you want to use is not eligible for deployment, verify that your cluster meets the requirements to deploy Kubeflow Pipelines.

  5. Namespaces are used to manage resources in large GKE clusters. If you do not plan to use namespaces in your cluster, select default in the Namespace drop-down list.

    If you plan to use namespaces in your GKE cluster, create a namespace using the Namespace drop-down list. To create a namespace:

    1. Select Create a namespace in the Namespace drop-down list. The New namespace name box appears.
    2. Enter the namespace name in New namespace name.

    To learn more about namespaces, read a blog post about organizing Kubernetes with namespaces.

  6. In the App instance name box, enter a name for your Kubeflow Pipelines instance.

  7. Managed storage lets you store your ML pipeline's metadata and artifacts using Cloud SQL and Cloud Storage, instead of storing them on Compute Engine persistent disks. Using managed services to store your pipeline artifacts and metadata makes it easier to back up and restore your cluster's data. To deploy Kubeflow Pipelines with managed storage, select Use managed storage and supply the following information:

    • Artifact storage Cloud Storage bucket: With managed storage, Kubeflow Pipelines stores pipeline artifacts in a Cloud Storage bucket. Specify the name of the bucket you want Kubeflow Pipelines to store artifacts in. If the specified bucket doesn't exist, the Kubeflow Pipelines deployer automatically creates a bucket for you in the us-central1 region.

      Learn more about creating a new bucket.

    • Cloud SQL instance connection name: With managed storage, Kubeflow Pipelines stores pipeline metadata in a MySQL database on Cloud SQL. Specify the connection name for your Cloud SQL MySQL instance.

      Learn more about setting up your Cloud SQL instance.

    • Database username: Specify the database username for Kubeflow Pipelines to use when connecting to your MySQL instance. Currently, your database user must have ALL MySQL privileges to deploy Kubeflow Pipelines with managed storage. If you leave this field empty, this value defaults to root.

      Learn more about MySQL users.

    • Database password: Specify the database password for Kubeflow Pipelines to use when connecting to your MySQL instance. If you leave this field empty, Kubeflow Pipelines connects to your database without providing a password, which fails if a password is required for the username you specified.

    • Database name prefix: Specify the database name prefix. The prefix value must start with a letter and contain only lowercase letters, numbers, and underscores.

      During the deployment process, Kubeflow Pipelines creates two databases, "DATABASE_NAME_PREFIX_pipeline" and "DATABASE_NAME_PREFIX_metadata". If databases with these names exist in your MySQL instance, Kubeflow Pipelines reuses the existing databases. If this value is not specified, the App instance name is used as the database name prefix.

  8. Click Deploy. This step may take several minutes.

  9. To access the pipelines dashboard, open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

    Then, click Open pipelines dashboard for your AI Platform Pipelines instance.

Deploy AI Platform Pipelines to an existing GKE cluster

To use Google Cloud Marketplace to deploy Kubeflow Pipelines on a GKE cluster, the following must be true:

  • Your cluster must have at least 3 nodes. Each node must have at least 2 CPUs and 4 GB of memory available.
  • The cluster's access scope must grant full access to all Cloud APIs, or your cluster must use a custom service account.
  • The cluster must not already have Kubeflow Pipelines installed.

Learn more about configuring your GKE cluster for AI Platform Pipelines.
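
Note that a node pool's access scopes are set when the node pool is created and can't be changed in place. If your existing cluster doesn't meet the access requirement and you don't want to use a custom service account, one option is to add a node pool that grants full access to Cloud APIs, as in the following sketch; POOL_NAME, CLUSTER_NAME, and ZONE are placeholders:

  # Add a node pool whose nodes have full access to Cloud APIs.
  gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --zone=ZONE \
    --num-nodes=3 \
    --machine-type=n1-standard-2 \
    --scopes=https://www.googleapis.com/auth/cloud-platform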

Use the following instructions to set up Kubeflow Pipelines on a GKE cluster.

  1. Open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

  2. In the AI Platform Pipelines toolbar, click New instance. Kubeflow Pipelines opens in Google Cloud Marketplace.

  3. Click Configure. The Deploy Kubeflow Pipelines form opens.

  4. In the Cluster drop-down list, select your cluster. If the cluster you want to use is not eligible for deployment, verify that your cluster meets the requirements to deploy Kubeflow Pipelines.

  5. Namespaces are used to manage resources in large GKE clusters. If your cluster does not use namespaces, select default in the Namespace drop-down list.

    If your cluster uses namespaces, select an existing namespace or create a namespace using the Namespace drop-down list. To create a namespace:

    1. Select Create a namespace in the Namespace drop-down list. The New namespace name box appears.
    2. Enter the namespace name in New namespace name.

    To learn more about namespaces, read a blog post about organizing Kubernetes with namespaces.

  6. In the App instance name box, enter a name for your Kubeflow Pipelines instance.

  7. Managed storage lets you store your ML pipeline's metadata and artifacts using Cloud SQL and Cloud Storage, instead of storing them on Compute Engine persistent disks. Using managed services to store your pipeline artifacts and metadata makes it easier to back up and restore your cluster's data. To deploy Kubeflow Pipelines with managed storage, select Use managed storage and supply the following information:

    • Artifact storage Cloud Storage bucket: With managed storage, Kubeflow Pipelines stores pipeline artifacts in a Cloud Storage bucket. Specify the name of the bucket you want Kubeflow Pipelines to store artifacts in. If the specified bucket doesn't exist, the Kubeflow Pipelines deployer automatically creates a bucket for you in the us-central1 region.

      Learn more about creating a new bucket.

    • Cloud SQL instance connection name: With managed storage, Kubeflow Pipelines stores pipeline metadata in a MySQL database on Cloud SQL. Specify the connection name for your Cloud SQL MySQL instance.

      Learn more about setting up your Cloud SQL instance.

    • Database username: Specify the database username for Kubeflow Pipelines to use when connecting to your MySQL instance. Currently, your database user must have ALL MySQL privileges to deploy Kubeflow Pipelines with managed storage. If you leave this field empty, this value defaults to root.

      Learn more about MySQL users.

    • Database password: Specify the database password for Kubeflow Pipelines to use when connecting to your MySQL instance. If you leave this field empty, Kubeflow Pipelines connects to your database without providing a password, which fails if a password is required for the username you specified.

    • Database name prefix: Specify the database name prefix. The prefix value must start with a letter and contain only lowercase letters, numbers, and underscores.

      During the deployment process, Kubeflow Pipelines creates two databases, "DATABASE_NAME_PREFIX_pipeline" and "DATABASE_NAME_PREFIX_metadata". If databases with these names exist in your MySQL instance, Kubeflow Pipelines reuses the existing databases. If this value is not specified, the App instance name is used as the database name prefix.

  8. Click Deploy. This step may take several minutes.

  9. To access the pipelines dashboard, open AI Platform Pipelines in the Google Cloud console.

    Go to AI Platform Pipelines

    Then, click Open pipelines dashboard for your AI Platform Pipelines instance.

What's next