Deploy to GKE Enterprise user clusters

This document describes how to deploy your applications to GKE Enterprise user clusters. Support for GKE Enterprise targets enables deployment to clusters running on AWS, on Azure, and on-premises.

Cloud Deploy lets you deploy your container-based workloads to any GKE Enterprise user cluster that you can access using Connect gateway.

Before you begin

  • Have a GKE Enterprise user cluster that you will deploy to.

    This can be a cluster that you created as a GKE Enterprise user cluster, or an existing Kubernetes cluster that you register to a fleet. Clusters that you create for GKE Enterprise automatically receive fleet memberships. For existing clusters that you register, you choose a membership name at registration time. You'll need this membership name for the target configuration.

    If you're using Google Cloud CLI version 407.0.0 or newer, include the --install-connect-agent flag on the gcloud container fleet memberships register command when you register a Google Kubernetes Engine cluster; the Connect agent is no longer installed by default. An example registration command follows this list.

  • Set up Connect gateway to connect the registered cluster or clusters to Google Cloud.

    Be sure to set up the gateway with the same service account that you'll use as the Cloud Deploy execution service account. If you don't, the execution service account won't have the permissions it needs to deploy to the GKE Enterprise cluster.
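
As an example, the following is a minimal sketch of registering an existing GKE cluster to a fleet and then confirming that Connect gateway can reach it. The membership name, cluster location, and project ID are placeholders, and the flags you need depend on your cluster type and how it authenticates.

    # Register the cluster to a fleet. With Google Cloud CLI 407.0.0 or newer,
    # include --install-connect-agent for GKE clusters (see the note above).
    # All names here are placeholders.
    gcloud container fleet memberships register my-app-dev-cluster \
        --gke-cluster=us-central1/my-app-dev-cluster \
        --enable-workload-identity \
        --install-connect-agent \
        --project=my-app

    # Confirm that the cluster is reachable through Connect gateway.
    gcloud container fleet memberships get-credentials my-app-dev-cluster \
        --project=my-app
    kubectl get namespaces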

Set up Cloud Deploy to deploy to GKE Enterprise

  1. Create your target configuration.

    You can configure the target in your delivery pipeline YAML or in a separate file. You can also configure more than one target in the same file, but each must be in its own kind: Target stanza. A sketch of a delivery pipeline that references a target follows the target example later in this document.

  2. Grant the execution service account the roles that it needs so that it can interact with connected clusters through the gateway.

    This grant is necessary whether you're using the default Cloud Deploy execution service account or a custom service account. A sketch of this grant, and of the RBAC configuration in the next step, follows this procedure.

  3. Set up RBAC for the execution service account on the Kubernetes cluster that underlies the GKE Enterprise cluster.

  4. Optional: If the underlying cluster isn't a GKE cluster, you might need to configure an imagePullSecret so that the cluster can pull images from Artifact Registry.

  5. In the target definition, create an anthosCluster stanza that points to the GKE Enterprise cluster.

    The syntax for specifying a GKE Enterprise cluster is as follows:

     anthosCluster:
       membership: projects/[project_name]/locations/global/memberships/[membership_name]

    This GKE Enterprise resource identifier uses the following elements:

    • [project_name] is the name of the Google Cloud project in which you're running this cluster.

      The cluster you're deploying to, including a GKE Enterprise cluster, doesn't need to be in the same project as your delivery pipeline.

    • [membership_name] is the name that you chose when registering the cluster to a fleet.

    All GKE Enterprise cluster memberships are global, so leave /locations/global/ in this resource identifier unchanged.
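
As a sketch of steps 2 and 3, assuming the default Cloud Deploy execution service account (the Compute Engine default service account), the gateway grant might look like the following. The project ID (my-app) and PROJECT_NUMBER are placeholders, and the exact roles you need depend on your setup.

    # Grant the execution service account access through Connect gateway.
    # my-app and PROJECT_NUMBER are placeholders.
    gcloud projects add-iam-policy-binding my-app \
        --member=serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com \
        --role=roles/gkehub.gatewayEditor

    gcloud projects add-iam-policy-binding my-app \
        --member=serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com \
        --role=roles/gkehub.viewer

On the cluster itself, the RBAC from step 3 could be a ClusterRoleBinding along these lines; cluster-admin is broad, so use a narrower ClusterRole where possible. Depending on your cluster type, Connect gateway might also require an impersonation policy, which the gcloud container fleet memberships generate-gateway-rbac command can generate for you.

    # rbac.yaml -- apply to the target cluster with: kubectl apply -f rbac.yaml
    # Binds the execution service account, as seen through the gateway, to cluster-admin.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cloud-deploy-execution
    subjects:
    - kind: User
      name: PROJECT_NUMBER-compute@developer.gserviceaccount.com
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io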

The following is an example target configuration, pointing to a GKE Enterprise user cluster:

      apiVersion: deploy.cloud.google.com/v1
      kind: Target
      metadata:
        name: qsdev
      description: development cluster
      anthosCluster:
        membership: projects/my-app/locations/global/memberships/my-app-dev-cluster
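
To use a target like this, reference it by name from a delivery pipeline stage and register both resources with Cloud Deploy. The following is a minimal sketch; the pipeline name, file name, and region are placeholders.

      # clouddeploy.yaml (placeholder file name) -- a pipeline with one stage
      # that deploys to the qsdev target defined above.
      apiVersion: deploy.cloud.google.com/v1
      kind: DeliveryPipeline
      metadata:
        name: my-app-pipeline
      description: delivery pipeline for the qsdev target
      serialPipeline:
        stages:
        - targetId: qsdev

You can then register the pipeline and target with a command along these lines:

      gcloud deploy apply --file=clouddeploy.yaml --region=us-central1 --project=my-app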

What's next