Prerequisites for registering a cluster

This page describes the prerequisites and requirements for registering a Kubernetes cluster to a Google Cloud fleet, including network, Google Cloud, and Kubernetes cluster configuration, as well as resource requirements for the Connect Agent.

All registrations

The following are prerequisites for manually registering clusters of any type.

Install command line tools

Ensure you have the following command line tools installed. If you are using Cloud Shell as your shell environment for interacting with Google Cloud, these tools are installed for you.

Install the gcloud command-line tool

The gcloud command-line tool is the command-line interface (CLI) to Google Cloud, and is included in the Cloud SDK. You can register clusters by using the gcloud command-line tool or, depending on your cluster type, by using other tools such as Terraform or the Cloud Console. However, even if you don't use it for cluster registration, gcloud is required or useful for many of the other setup steps on this page.

  1. If you don't have it installed already, install the Cloud SDK following the installation instructions. You need version 281.0.0 or higher of Cloud SDK.

  2. Run the following command to log in to Google Cloud:

gcloud auth login
  3. (Optional) Install the gcloud beta component if you plan to try alpha or beta Connect features:

gcloud components install beta
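
To verify that your installed version meets the minimum, run:

gcloud version

The Google Cloud SDK line in the output shows the installed version.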

Install kubectl

While kubectl is not required to register a cluster, you may need it to grant the necessary role-based access control (RBAC) permissions to the user registering the cluster (if they are not the cluster owner), as well as for other platform-specific setup. You need a kubectl version no lower than the minimum supported Kubernetes version of Google Kubernetes Engine (GKE).

We recommend installing kubectl with Cloud SDK.

To check the version of kubectl:

kubectl version

The client version is indicated by the gitVersion field in the output.
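
For example, to print only the client version, you can run the following (the --client and --output flags are available in recent kubectl releases):

kubectl version --client --output=yaml

The gitVersion field appears under clientVersion in the output.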

To install kubectl:

gcloud components install kubectl

Enable APIs

You need to enable the following APIs in your Google Cloud project:

  • container.googleapis.com
  • gkeconnect.googleapis.com
  • gkehub.googleapis.com
  • cloudresourcemanager.googleapis.com

Pods in your cluster must be able to reach googleapis.com and gkeconnect.googleapis.com addresses, either directly or by using a configured proxy server.

If you want to enable Workload Identity for your registration, you must also enable the following:

  • iam.googleapis.com

Non-project owners must be granted the serviceusage.services.enable permission before they can enable APIs.
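
For example, a project owner can grant the Service Usage Admin role, which includes this permission (any role or custom role that contains serviceusage.services.enable also works):

gcloud projects add-iam-policy-binding [FLEET_HOST_PROJECT_ID] \
 --member=user:[GCP_EMAIL_ADDRESS] \
 --role=roles/serviceusage.serviceUsageAdmin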

gcloud

To enable these APIs, run the following command:

gcloud services enable \
 --project=[FLEET_HOST_PROJECT_ID] \
 container.googleapis.com \
 gkeconnect.googleapis.com \
 gkehub.googleapis.com \
 cloudresourcemanager.googleapis.com \
 iam.googleapis.com

where:

  • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.

If you don't want to enable fleet Workload Identity, you can omit iam.googleapis.com.

To list APIs you've already enabled in your projects, follow the instructions in Listing Services in the Service Usage documentation.
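
For a quick check that the fleet-related APIs from the list above are enabled, you can also filter the output of gcloud, for example:

gcloud services list --enabled --project=[FLEET_HOST_PROJECT_ID] | grep -E 'gkehub|gkeconnect'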

Grant the required IAM roles to the user registering the cluster

Having roles/owner in your project gives you complete control and allows you to complete all registration tasks.

If you do not have roles/owner in your project, you need to be granted specific IAM roles before you can connect clusters to Google. See Connect IAM roles.

The following IAM roles ensure that you are able to register and connect to clusters using a service account:

  • roles/gkehub.admin
  • roles/iam.serviceAccountAdmin
  • roles/iam.serviceAccountKeyAdmin
  • roles/resourcemanager.projectIamAdmin

For GKE clusters only, you can also add the following role to get admin permissions on the cluster, if you don't have it already (your user account is likely to have it if you created the cluster):

  • roles/container.admin

For GKE clusters, this IAM role includes the Kubernetes role-based access control (RBAC) cluster-admin role. For other cluster environments you need to grant this RBAC role using kubectl, as described in the following section. You can find out more about the relationship between IAM and RBAC roles in GKE in the GKE documentation.

If you are registering a cluster using Workload Identity, you just need the following IAM role:

  • roles/gkehub.admin

gcloud

To grant these roles, run the following commands. The add-iam-policy-binding command binds a single role per invocation, so the roles are granted one at a time:

FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
for ROLE in \
 roles/gkehub.admin \
 roles/iam.serviceAccountAdmin \
 roles/iam.serviceAccountKeyAdmin \
 roles/resourcemanager.projectIamAdmin; do
 gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
  --member=user:[GCP_EMAIL_ADDRESS] \
  --role=${ROLE}
done

where:

  • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
  • [GCP_EMAIL_ADDRESS] is the email address that the user registering clusters uses to log in to Google Cloud.

To learn more about how to grant IAM roles, refer to Granting, Changing, and Revoking Access to Resources in the IAM documentation.
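
To verify which roles have been granted to a user, you can run a read-only check such as the following (uses the same placeholders as above):

gcloud projects get-iam-policy [FLEET_HOST_PROJECT_ID] \
 --flatten="bindings[].members" \
 --filter="bindings.members:user:[GCP_EMAIL_ADDRESS]" \
 --format="value(bindings.role)"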

Grant read-only roles to other users

Authenticated users with the following roles are able to view registered user clusters from the Cloud Console. These roles provide read-only access:

  • roles/gkehub.viewer
  • roles/container.viewer

gcloud

To grant a user the required roles, run the following commands (one binding per role):

FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
for ROLE in roles/gkehub.viewer roles/container.viewer; do
 gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
  --member=user:[USER_EMAIL_ADDRESS] \
  --role=${ROLE}
done

where:

  • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
  • [USER_EMAIL_ADDRESS] is the email address of an authenticated user.

Grant the cluster-admin RBAC role to the user registering the cluster

The cluster-admin role-based access control (RBAC) ClusterRole grants the permissions necessary to register your cluster. If the cluster you want to register is on Google Cloud, you can get the same permissions using the IAM roles/container.admin role, as described in Google Cloud setup and configuration.

kubectl

If you created the cluster, you likely have this role. You can verify by running the following command:

kubectl auth can-i '*' '*' --all-namespaces

If you or another user needs the role, create a ClusterRoleBinding resource in the cluster:

kubectl create clusterrolebinding [BINDING_NAME] --clusterrole cluster-admin --user [USER] 

where:

  • [BINDING_NAME] is a name that you choose for the ClusterRoleBinding resource.
  • [USER] is the identity used to authenticate against the cluster.
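
For example, to grant the role to a hypothetical user alice@example.com:

kubectl create clusterrolebinding cluster-admin-binding \
 --clusterrole cluster-admin \
 --user alice@example.com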

For more information about the cluster-admin role, refer to the Kubernetes documentation.

GKE clusters on Google Cloud

If you are registering a GKE cluster on Google Cloud, you may need to do one or more of the following before registering the cluster, depending on the registration option you choose.

Enable GKE Workload Identity

If you want to register GKE clusters with fleet Workload Identity enabled, you must ensure GKE Workload Identity is enabled on the cluster before registration. Registering a GKE cluster with fleet Workload Identity without having GKE Workload Identity enabled on the cluster can lead to inconsistencies on how identity is asserted by workloads in the cluster, and is not a supported configuration. You can find out more about the advantages of registering using fleet Workload Identity in Registering a cluster.

gcloud

To check if your GKE cluster has Workload Identity enabled, run the following command to list the cluster's Workload Identity pool:

gcloud container clusters describe GKE_CLUSTER --format="value(workloadIdentityConfig.workloadPool)"

Replace the following:

  • GKE_CLUSTER: the name of the GKE cluster.

If you see a result similar to the following then Workload Identity is already enabled on your GKE cluster:

GKE_PROJECT_ID.svc.id.goog

If there are no results, then Workload Identity is not enabled. To enable Workload Identity, follow the instructions in Enabling Workload Identity.
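
As a sketch, enabling Workload Identity on an existing cluster typically uses a command like the following; see Enabling Workload Identity for the authoritative steps, including updating existing node pools:

gcloud container clusters update GKE_CLUSTER \
 --workload-pool=GKE_PROJECT_ID.svc.id.goog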

Grant permissions for registering a cluster into a different project

Registering a GKE cluster to the fleet in its own project does not require any special permission setup beyond that described in All registrations. If you want to register a GKE cluster from its own project (GKE_PROJECT) to a fleet in a different project (FLEET_HOST_PROJECT), however, the FLEET_HOST_PROJECT service agent account gcp-sa-gkehub must have the gkehub.serviceAgent role in the GKE_PROJECT project. This role grants the service account the permissions to manage cluster resources in that project.

You can check if the fleet host project gcp-sa-gkehub service account has the required role in your cluster's project using gcloud tool or the Cloud Console, as follows.

gcloud

Run the following command:

gcloud projects get-iam-policy [GKE_PROJECT_ID]
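
To narrow the output to just the service agent binding, you can optionally filter the policy:

gcloud projects get-iam-policy [GKE_PROJECT_ID] \
 --flatten="bindings[].members" \
 --filter="bindings.role=roles/gkehub.serviceAgent" \
 --format="value(bindings.members)"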

Console

  1. With your cluster's project selected, go to the IAM & Admin page in the Cloud Console.

     Go to the IAM & Admin page

  2. Select the Include Google-provided role grants checkbox to view the complete policy, including service agents.

If you see gcp-sa-gkehub, it should have the form service-[FLEET_HOST_PROJECT_NUMBER]@gcp-sa-gkehub.iam.gserviceaccount.com. For example:

  - members:
    - serviceAccount:service-1234567890@gcp-sa-gkehub.iam.gserviceaccount.com
    role: roles/gkehub.serviceAgent

If you don't see the service agent listed in the project's IAM policy, do the following to update the necessary permissions:

gcloud

  1. To grant gcp-sa-gkehub the gkehub.serviceAgent role, first ensure that this service account exists. If you have registered clusters in this project before, then this service account should exist already. You can check by looking at the IAM policy for the fleet host project:

    gcloud projects get-iam-policy [FLEET_HOST_PROJECT_ID]
    
  2. If you need to create the gcp-sa-gkehub service account, run the following command:

    gcloud beta services identity create --service=gkehub.googleapis.com --project=[FLEET_HOST_PROJECT_ID]

    This command should output the following:

    Service identity created: service-[FLEET_HOST_PROJECT_NUMBER]@gcp-sa-gkehub.iam.gserviceaccount.com
    
  3. Run the following command to grant the service account the roles/gkehub.serviceAgent role in both projects:

    GKE_PROJECT_ID=[GKE_PROJECT_ID]
    FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
    FLEET_HOST_PROJECT_NUMBER=$(gcloud projects describe "${FLEET_HOST_PROJECT_ID}" --format "value(projectNumber)")
    gcloud projects add-iam-policy-binding "${FLEET_HOST_PROJECT_ID}" \
    --member "serviceAccount:service-${FLEET_HOST_PROJECT_NUMBER}@gcp-sa-gkehub.iam.gserviceaccount.com" \
    --role roles/gkehub.serviceAgent
    gcloud projects add-iam-policy-binding "${GKE_PROJECT_ID}" \
    --member "serviceAccount:service-${FLEET_HOST_PROJECT_NUMBER}@gcp-sa-gkehub.iam.gserviceaccount.com" \
    --role roles/gkehub.serviceAgent

    where:

    • [GKE_PROJECT_ID] is the Google Cloud project ID of the GKE cluster.
    • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
  4. To confirm that the role binding is granted, run the following command again:

    gcloud projects get-iam-policy [GKE_PROJECT_ID]
    

    If you see the service account name along with the gkehub.serviceAgent role, the role binding has been granted. For example:

    - members:
      - serviceAccount:service-[FLEET_HOST_PROJECT_NUMBER]@gcp-sa-gkehub.iam.gserviceaccount.com
      role: roles/gkehub.serviceAgent
    

Set up an identity for use by the Connect Agent

While most registration options for GKE clusters on Google Cloud do not install the Connect Agent on your clusters, you currently need to install the agent to use the Connect gateway. You can do this by registering the cluster with the gcloud command-line tool. The Connect Agent needs an identity to authenticate to Google Cloud.

  • Fleet Workload Identity (recommended): Ensure the cluster has GKE Workload Identity enabled before you register the cluster, as described above.
  • Service account: If you choose to use a Google Cloud service account instead of Workload Identity, follow the instructions below to create a service account and download its JSON key.

Configure a service account for Terraform

If you want to use Terraform to register a Google Kubernetes Engine cluster, you need to create a service account that Terraform can use to access the Fleet API to create a membership.

gcloud

  1. Create a service account as follows:

    gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME] --project=[FLEET_HOST_PROJECT_ID]
  2. Bind the gkehub.admin IAM role to the service account so that Terraform can use it with the Fleet API:

    FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
    gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
     --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com" \
     --role="roles/gkehub.admin"
    

    If you want to create a new cluster with Terraform and then register it, you also need to bind the roles/container.admin role to the service account so that Terraform can use it to access the GKE API to create a cluster.

    FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
    gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
     --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com" \
     --role="roles/container.admin"
    

    Replace the following:

    • [SERVICE_ACCOUNT_NAME] is the name that you choose for the service account.
    • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters.

  3. Download the service account's private key JSON file, as described in Create a Google Cloud service account using gcloud. You will need this file to create and register clusters using Terraform.
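
    For reference, the key download uses the same command shown later on this page in Create a Google Cloud service account using gcloud:

    FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
    gcloud iam service-accounts keys create [LOCAL_KEY_PATH] \
      --iam-account=[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com \
      --project=${FLEET_HOST_PROJECT_ID}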

Configure a service account for Config Connector

If you want to use Config Connector to register a GKE cluster, do the following:

gcloud

  1. Ensure that you have installed the Config Connector add-on. Your Config Connector version must be above 1.47.0.

  2. Follow the Config Connector instructions to create a service account.

  3. Bind the gkehub.admin IAM role to this service account so that your Config Connector can use this service account to access the Fleet API:

    FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
    gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
     --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com" \
     --role="roles/gkehub.admin"
    

    If you want to create a new cluster with Config Connector and then register it, you also need to bind the roles/container.admin role to the service account so that your Config Connector controller can use it to access the GKE API to create a cluster.

    FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
    gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
     --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com" \
     --role="roles/container.admin"
    

    Replace the following:

    • [SERVICE_ACCOUNT_NAME] is the name that you choose for the service account.
    • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters.

  4. Follow the Config Connector instructions to configure your Config Connector installation with this service account.

Clusters outside Google Cloud

All manual registrations outside Google Cloud, such as registering attached clusters, require the following in addition to the prerequisites for all registrations.

Ensure network connectivity

To successfully register your cluster, you need to ensure that the domains below are reachable from your Kubernetes cluster.

  • cloudresourcemanager.googleapis.com to resolve metadata about the Google Cloud project that the cluster is being connected to.
  • oauth2.googleapis.com to obtain short-lived OAuth tokens for agent operations against gkeconnect.googleapis.com.
  • gkeconnect.googleapis.com to establish the channel used to receive requests from Google Cloud and issue responses.
  • gkehub.googleapis.com to create the Google Cloud-side fleet membership resources that correspond to the clusters you connect to Google Cloud.
  • www.googleapis.com to authenticate service tokens from incoming Google Cloud service requests.
  • gcr.io to pull the GKE Connect Agent image.

If you want to register the cluster using Workload Identity, the following domains also must be reachable:

  • securetoken.googleapis.com
  • iamcredentials.googleapis.com
  • sts.googleapis.com

If you're using a proxy for Connect, you must also update the proxy's allowlist with the relevant domains.

If you use gcloud to register your Kubernetes cluster, these domains also need to be reachable in the environment where you run the gcloud commands.
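
One quick way to smoke-test reachability from the environment in question (assuming curl is installed) is to request each domain and check for any HTTP response, for example:

curl -sS -o /dev/null -w "%{http_code}\n" https://gkeconnect.googleapis.com

Any HTTP status code indicates that the domain is reachable; a timeout or connection error points to a network or proxy issue.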

Using VPC Service Controls

If you want to use VPC Service Controls for additional data security in your application, you need to ensure that the following services are in your service perimeter:

  • Resource Manager API (cloudresourcemanager.googleapis.com)
  • GKE Connect API (gkeconnect.googleapis.com)
  • Fleet API (gkehub.googleapis.com)

If you want to register your cluster with fleet Workload Identity enabled, you also need the following services:

  • IAM Service Account Credentials API (iamcredentials.googleapis.com)
  • Security Token Service API (sts.googleapis.com)

You also need to set up private connectivity for access to the relevant APIs. You can find out how to do this in Setting up private connectivity.

Set up identity

All manual cluster registration options outside Google Cloud require you to configure authentication to Google. You can choose between registering a cluster with fleet Workload Identity or register using a Google Cloud Service Account.

Anthos clusters on VMware, Anthos clusters on bare metal, and Anthos clusters on AWS are automatically registered to your project fleet at cluster creation time, with fleet Workload Identity enabled from version 1.8 onwards. Note that for these cluster types you still need to set up a service account for the initial registration; after registration, the Connect Agent uses fleet Workload Identity to authenticate to Google.

Attached clusters can be registered with fleet Workload Identity enabled if the cluster meets our Attached cluster prerequisites, as described below. Otherwise, register attached clusters with a Google Cloud service account for authentication.

Create a Google Cloud service account using gcloud

To manually register a cluster using a Google Cloud service account, you need a JSON file containing the service account credentials. To follow the principle of least privilege, we recommend that you create a distinct service account for each Kubernetes cluster that you register, and only bind IAM roles to it for the corresponding cluster.

To create this file, perform the following steps:

gcloud

Create a service account by running the following command:

gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME] --project=[FLEET_HOST_PROJECT_ID]

List all of a project's service accounts by running the following command:

gcloud iam service-accounts list --project=[FLEET_HOST_PROJECT_ID]

If you are creating a distinct service account for each Kubernetes cluster that you register, bind the gkehub.connect IAM role to the service account for its corresponding cluster with an IAM Condition on the cluster's membership name:

MEMBERSHIP_NAME=[MEMBERSHIP_NAME]
FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
SERVICE_ACCOUNT_NAME=[SERVICE_ACCOUNT_NAME]
gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
 --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com" \
 --role="roles/gkehub.connect" \
 --condition "expression=resource.name == \
'projects/${FLEET_HOST_PROJECT_ID}/locations/global/memberships/${MEMBERSHIP_NAME}',\
title=bind-${SERVICE_ACCOUNT_NAME}-to-${MEMBERSHIP_NAME}"

Otherwise, bind the role to the service account for all clusters in the project without the condition.

FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
gcloud projects add-iam-policy-binding ${FLEET_HOST_PROJECT_ID} \
 --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com" \
 --role="roles/gkehub.connect"

Download the service account's private key JSON file. You use this file when you Register a cluster:

FLEET_HOST_PROJECT_ID=[FLEET_HOST_PROJECT_ID]
gcloud iam service-accounts keys create [LOCAL_KEY_PATH] \
  --iam-account=[SERVICE_ACCOUNT_NAME]@${FLEET_HOST_PROJECT_ID}.iam.gserviceaccount.com \
  --project=${FLEET_HOST_PROJECT_ID}

where:

  • [FLEET_HOST_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
  • [SERVICE_ACCOUNT_NAME] is the display name that you choose for the Service Account.
  • [MEMBERSHIP_NAME] is the membership name that you choose to uniquely represent the cluster while registering it.
  • [LOCAL_KEY_PATH] is a local filepath where you'd like to save the service account's private key, a JSON file. We recommend that you name the file using the service account name and your project ID, such as /tmp/creds/[SERVICE_ACCOUNT_NAME]-[FLEET_HOST_PROJECT_ID].json.
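
To confirm that the key was created, you can optionally list the service account's keys:

gcloud iam service-accounts keys list \
 --iam-account=[SERVICE_ACCOUNT_NAME]@[FLEET_HOST_PROJECT_ID].iam.gserviceaccount.com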

Attached cluster prerequisites

Depending on the type of external cluster you want to register as an attached cluster, you may have to meet some additional requirements to install the Connect Agent and/or use fleet Workload Identity.

Configure Security Context Constraints (SCC) (OpenShift clusters)

On OpenShift OCP and OKD clusters, administrators can use SCCs to control permissions for pods. To allow the Connect Agent to be installed in your cluster, you need to create a custom SCC.

The following sample SCC definition specifies the set of conditions that Connect Agent must run with in order to be accepted into the cluster:

# Connect Agent SCC
apiVersion: v1
kind: SecurityContextConstraints
metadata:
  name: gke-connect-scc
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we provide it for defense in depth.
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: MustRunAs
  ranges:
  - min: 1
    max: 65535
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1
    max: 65535
volumes:
- secret
- projected
readOnlyRootFilesystem: true
seccompProfiles:
- docker/default
users:
groups:
  # Grants all service accounts in the gke-connect namespace access to this SCC
  - system:serviceaccounts:gke-connect

Assuming that you've saved your SCC definition as gke-connect-scc.yaml, use the OpenShift oc command-line tool to create the gke-connect-scc SCC for your cluster, as follows:

oc create -f gke-connect-scc.yaml

To verify that the custom SCC has been created, run the following oc command:

oc get scc | grep gke-connect-scc

Fleet Workload Identity requirements

You can register attached clusters with fleet Workload Identity enabled if your platform creates a public OIDC endpoint for your cluster (or allows you to create one), or if you have Kubernetes service account issuer discovery enabled for the cluster. If you can't meet these requirements, you must register attached clusters with a Google Cloud service account for authentication.

For specific cluster types, see the following:

  • OpenShift clusters: Can be registered with fleet Workload Identity enabled after you have configured your custom SCC, as described above.
  • kind clusters: Require service account issuer discovery to be enabled to use fleet Workload Identity. This is enabled by default from Kubernetes version 1.20. If you need to enable this feature, follow the instructions in Service account token volume projection; service account issuer discovery is enabled automatically when service account token volume projection is enabled. A quick way to check is shown after this list.
  • EKS clusters: Require the cluster to have a public IAM OIDC Identity Provider. Follow the instructions in Create an IAM OIDC provider for your cluster to check if a provider exists, and create a provider if necessary.
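
If you have kubectl access to the cluster, one way to check whether service account issuer discovery is available is to query the cluster's OIDC discovery endpoint, which only exists when the feature is enabled:

kubectl get --raw /.well-known/openid-configuration

A JSON document containing an issuer field indicates that issuer discovery is enabled.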

Resource usage and requirements

Typically, the Connect Agent installed at registration uses 500m of CPU and 200Mi of memory. However, this usage can vary with the number of requests made to the agent per second and the size of those requests. These are affected by a number of factors, including the size of the cluster, the number of users accessing the cluster through the Cloud Console (the more users and/or workloads, the more requests), and the number of fleet-enabled features on the cluster.

What's next?