This page describes the prerequisites and requirements for registering a Kubernetes cluster with Google Cloud, including network, Google Cloud, and Kubernetes cluster configuration, as well as resource requirements for the Connect Agent.
Ensure network connectivity
To successfully register your cluster, you need to ensure that the following domains are reachable from your Kubernetes cluster:
- cloudresourcemanager.googleapis.com: resolves metadata about the Google Cloud project that the cluster is being connected to.
- oauth2.googleapis.com: obtains short-lived OAuth tokens for agent operations against gkeconnect.googleapis.com.
- gkeconnect.googleapis.com: establishes the channel used to receive requests from Google Cloud and issue responses.
- gkehub.googleapis.com: creates the Google Cloud-side Hub Membership resource that corresponds to the cluster you're connecting to Google Cloud.
- www.googleapis.com: authenticates service tokens from incoming Google Cloud service requests.
- gcr.io: pulls the GKE Connect Agent image.
If you're using a proxy for Connect, you must also update the proxy's allowlist with these domains.
If you use gcloud to register your Kubernetes cluster, these domains also need to be reachable in the environment where you run the gcloud commands.
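For a quick spot check of this connectivity from inside the cluster, you can run a short-lived Pod that attempts an HTTPS request to each domain. This is only a sketch: the curlimages/curl image and the connectivity-check Pod name are arbitrary choices here, and clusters that egress through a proxy may need additional configuration:
kubectl run connectivity-check --rm -i --restart=Never \
  --image=curlimages/curl --command -- sh -c \
  'for d in cloudresourcemanager.googleapis.com oauth2.googleapis.com gkeconnect.googleapis.com gkehub.googleapis.com www.googleapis.com gcr.io; do curl -sSI --max-time 10 "https://$d" > /dev/null && echo "OK $d" || echo "FAILED $d"; done'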
Using VPC Service Controls
If you want to use VPC Service Controls for additional data security in your application, you need to ensure that the following services are in your service perimeter:
- Resource Manager API (cloudresourcemanager.googleapis.com)
- GKE Connect API (gkeconnect.googleapis.com)
- GKE Hub API (gkehub.googleapis.com)
You also need to set up private connectivity for access to the relevant APIs. You can find out how to do this in Setting up private connectivity.
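For example, if you manage your perimeter with the gcloud access-context-manager commands, adding these services might look like the following, where [PERIMETER_NAME] and [POLICY_ID] are placeholders for your existing service perimeter and access policy:
gcloud access-context-manager perimeters update [PERIMETER_NAME] \
  --policy=[POLICY_ID] \
  --add-restricted-services=cloudresourcemanager.googleapis.com,gkeconnect.googleapis.com,gkehub.googleapis.com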
Google Cloud setup and configuration
Install the Cloud SDK
Install the Cloud SDK, which includes gcloud, the command-line interface (CLI) to Google Cloud.
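To confirm that the installation succeeded and see which components are installed, you can run:
gcloud version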
Authorize gcloud to access Google Cloud
After you install Cloud SDK, run the following command to log in to Google Cloud:
gcloud auth login
Install gcloud beta (optional)
Install the gcloud beta component if you plan to try alpha or beta Connect features:
gcloud components install beta
Grant the required IAM roles to the user registering the cluster
Having roles/owner in your project gives you complete control and allows you to complete the registration tasks.
If you do not have roles/owner in your project, you need to be granted specific IAM roles before you can connect clusters to Google. See Connect IAM roles.
The following IAM roles ensure that you are able to register and connect to clusters:
- roles/gkehub.admin
- roles/iam.serviceAccountAdmin
- roles/iam.serviceAccountKeyAdmin
- roles/resourcemanager.projectIamAdmin
To grant these roles using gcloud, run the following:
# add-iam-policy-binding binds one role per invocation, so grant each role in turn.
for role in roles/gkehub.admin roles/iam.serviceAccountAdmin \
    roles/iam.serviceAccountKeyAdmin roles/resourcemanager.projectIamAdmin; do
  gcloud projects add-iam-policy-binding [HUB_PROJECT_ID] \
    --member user:[GCP_EMAIL_ADDRESS] \
    --role "${role}"
done
where:
- [HUB_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
- [GCP_EMAIL_ADDRESS] is the account that the user registering clusters uses to log in to Google Cloud.
To learn more about how to grant IAM roles, refer to Granting, Changing, and Revoking Access to Resources in the IAM documentation.
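To verify which of these roles an account actually holds, one option is to query the project's IAM policy and filter on the member, for example:
gcloud projects get-iam-policy [HUB_PROJECT_ID] \
  --flatten="bindings[].members" \
  --filter="bindings.members:user:[GCP_EMAIL_ADDRESS]" \
  --format="value(bindings.role)"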
Enable the required APIs in your project
You need to enable the following APIs in your Google Cloud project:
- container.googleapis.com
- gkeconnect.googleapis.com
- gkehub.googleapis.com
- cloudresourcemanager.googleapis.com
Pods in your cluster must be able to reach googleapis.com and gkeconnect.googleapis.com addresses, either directly or by using a configured proxy server.
Non-project owners must be granted the serviceusage.services.enable permission before they can enable APIs.
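One way to grant this is through a role that contains the permission, such as roles/serviceusage.serviceUsageAdmin; this is an illustrative option, and a narrower custom role would also work:
gcloud projects add-iam-policy-binding [HUB_PROJECT_ID] \
  --member user:[GCP_EMAIL_ADDRESS] \
  --role roles/serviceusage.serviceUsageAdmin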
To enable these APIs, run the following command:
gcloud services enable \
  --project=[HUB_PROJECT_ID] \
  container.googleapis.com \
  gkeconnect.googleapis.com \
  gkehub.googleapis.com \
  cloudresourcemanager.googleapis.com
where:
- [HUB_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
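To confirm that all four APIs are enabled, you can list the project's enabled services and filter for them, for example:
gcloud services list --enabled --project=[HUB_PROJECT_ID] \
  | grep -E 'container|gkeconnect|gkehub|cloudresourcemanager'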
Grant read-only roles to other users
Authenticated users with the following roles are able to view registered user clusters from Cloud Console. These roles provide read-only access:
- roles/gkehub.viewer
- roles/container.viewer
For example, to grant a user in your project the roles/gkehub.viewer and roles/container.viewer roles, you'd run the following commands:
gcloud projects add-iam-policy-binding [HUB_PROJECT_ID] \
  --member user:[USER_EMAIL_ADDRESS] \
  --role roles/gkehub.viewer

gcloud projects add-iam-policy-binding [HUB_PROJECT_ID] \
  --member user:[USER_EMAIL_ADDRESS] \
  --role roles/container.viewer
where:
- [HUB_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
- [USER_EMAIL_ADDRESS] is the email address of an authenticated user.
Create a Google Cloud service account using gcloud
A JSON file containing Google Cloud Service Account credentials is required to manually register a cluster. To follow the principle of least privilege, we recommend that you create a distinct service account for each Kubernetes cluster that you register, and only bind IAM roles to it for the corresponding cluster.
To create this file, perform the following steps:
Create a service account by running the following command:
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME] --project=[HUB_PROJECT_ID]
List all of a project's service accounts by running the following command:
gcloud iam service-accounts list --project=[HUB_PROJECT_ID]
If you are creating a distinct service account for each Kubernetes cluster that you register, bind the gkehub.connect IAM role to the service account for its corresponding cluster with an IAM Condition on the cluster's membership name:
MEMBERSHIP_NAME=[MEMBERSHIP_NAME]
HUB_PROJECT_ID=[HUB_PROJECT_ID]
SERVICE_ACCOUNT_NAME=[SERVICE_ACCOUNT_NAME]

gcloud projects add-iam-policy-binding ${HUB_PROJECT_ID} \
  --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${HUB_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/gkehub.connect" \
  --condition="expression=resource.name == 'projects/${HUB_PROJECT_ID}/locations/global/memberships/${MEMBERSHIP_NAME}',title=bind-${SERVICE_ACCOUNT_NAME}-to-${MEMBERSHIP_NAME}"
Otherwise, bind the role to the service account for all clusters in the project without the condition.
HUB_PROJECT_ID=[HUB_PROJECT_ID]

gcloud projects add-iam-policy-binding ${HUB_PROJECT_ID} \
  --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@${HUB_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/gkehub.connect"
Download the service account's private key JSON file. You use this file when you Register a cluster:
HUB_PROJECT_ID=[HUB_PROJECT_ID]

gcloud iam service-accounts keys create [LOCAL_KEY_PATH] \
  --iam-account=[SERVICE_ACCOUNT_NAME]@${HUB_PROJECT_ID}.iam.gserviceaccount.com \
  --project=${HUB_PROJECT_ID}
where:
- [HUB_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
- [SERVICE_ACCOUNT_NAME] is the display name that you choose for the Service Account.
- [MEMBERSHIP_NAME] is the membership name that you choose to uniquely represent the cluster while registering it.
- [LOCAL_KEY_PATH] is a local filepath where you'd like to save the service account's private key, a JSON file. We recommend that you name the file using the service account name and your project ID, such as /tmp/creds/[SERVICE_ACCOUNT_NAME]-[HUB_PROJECT_ID].json.
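As a sanity check that the key was created for the intended service account, you can list the account's keys and compare the key ID with the private_key_id field in the downloaded JSON file:
gcloud iam service-accounts keys list \
  --iam-account=[SERVICE_ACCOUNT_NAME]@[HUB_PROJECT_ID].iam.gserviceaccount.com \
  --project=[HUB_PROJECT_ID]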
Registering a GKE cluster into a different project
The first time you register a GKE cluster from its own project (GKE_PROJECT) into a different project (HUB_PROJECT), you must grant the necessary permissions. The HUB_PROJECT default service account, gcp-sa-gkehub, requires the Hub Service Agent role in the GKE_PROJECT project. The Hub Service Agent is an IAM role that grants the service account the permissions to manage cluster resources.
You can confirm that gcp-sa-gkehub has the required role using the gcloud tool or the Cloud Console. If the command or the dashboard does not display gcp-sa-gkehub, the required role is missing. If you see gcp-sa-gkehub, it should have the form service-[HUB_PROJECT_NUMBER]@gcp-sa-gkehub.iam.gserviceaccount.com.
Run the following command:
gcloud projects get-iam-policy [GKE_PROJECT_ID]
To grant gcp-sa-gkehub the Hub Service Agent role, you need to first ensure that the Hub default Service Account exists. If you registered clusters in this project before, this Service Account should already exist.
To create the gcp-sa-gkehub service account, run the following command:
gcloud beta services identity create --service=gkehub.googleapis.com --project=[HUB_PROJECT_ID]
This command should output the following:
Service identity created: service-[HUB_PROJECT_NUMBER]@gcp-sa-gkehub.iam.gserviceaccount.com
Once the gcp-sa-gkehub service account exists, run the following commands to grant it the roles/gkehub.serviceAgent role:
GKE_PROJECT_ID=[GKE_PROJECT_ID]
HUB_PROJECT_ID=[HUB_PROJECT_ID]
HUB_PROJECT_NUMBER=$(gcloud projects describe "${HUB_PROJECT_ID}" --format "value(projectNumber)")

gcloud projects add-iam-policy-binding "${HUB_PROJECT_ID}" \
  --member "serviceAccount:service-${HUB_PROJECT_NUMBER}@gcp-sa-gkehub.iam.gserviceaccount.com" \
  --role roles/gkehub.serviceAgent

gcloud projects add-iam-policy-binding "${GKE_PROJECT_ID}" \
  --member "serviceAccount:service-${HUB_PROJECT_NUMBER}@gcp-sa-gkehub.iam.gserviceaccount.com" \
  --role roles/gkehub.serviceAgent
where:
- [GKE_PROJECT_ID] is the Google Cloud project ID of the GKE cluster.
- [HUB_PROJECT_ID] is the Google Cloud project ID in which you want to register clusters. Learn how to find this value.
To confirm that the role binding is granted:
gcloud projects get-iam-policy [GKE_PROJECT_ID]
If you see the service account name along with the gkehub.serviceAgent role, the role binding has been granted. For example:
- members:
  - serviceAccount:service-[HUB_PROJECT_NUMBER]@gcp-sa-gkehub.iam.gserviceaccount.com
  role: roles/gkehub.serviceAgent
Kubernetes setup and configuration
Install kubectl
We recommend installing kubectl with Cloud SDK.
To check the version of kubectl:
kubectl version
The client version is indicated by the gitVersion field in the output.
To install kubectl:
gcloud components install kubectl
Grant the cluster-admin RBAC role to the user registering the cluster
The cluster-admin role-based access control (RBAC) ClusterRole grants you the cluster permissions necessary to connect your clusters back to Google.
If you created the cluster, you likely have this role. You can verify by running the following command:
kubectl auth can-i '*' '*' --all-namespaces
If you or another user needs the role, create a ClusterRoleBinding resource in the cluster:
kubectl create clusterrolebinding [BINDING_NAME] --clusterrole cluster-admin --user [USER]
where:
- [BINDING_NAME] is a name that you choose for the ClusterRoleBinding resource.
- [USER] is the identity used to authenticate against the cluster.
For more information about the cluster-admin role, refer to the Kubernetes documentation.
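To check permissions on behalf of another user, you can combine the same check with impersonation (this assumes your own credentials are allowed to impersonate that identity):
kubectl auth can-i '*' '*' --all-namespaces --as [USER]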
Using admission controls
As part of registering your cluster with Google Cloud, the Connect Agent is installed in your cluster. Depending on the container platform and the admission controller, you may need to create admission policies that allow the installation of the agent.
Pod Security Policies (PSP)
A PodSecurityPolicy is an optional admission controller resource that validates requests to create and update Pods on your cluster. Pod Security Policies are enforced only when the PSP admission controller plugin is enabled.
If your cluster uses the PSP admission controller plugin, you don't need to create any additional policies to use the Connect Agent, as Connect Agent-specific policy and RBAC roles are auto-installed as part of cluster registration.
You can use the following commands to check the installed PSP policy and RBAC roles:
$ kubectl get psp | grep connect
$ kubectl get role,rolebindings -n gke-connect | grep psp
Security Context Constraints (SCC)
On OpenShift Container Platform (OCP) and OKD clusters, administrators can use SCCs to control permissions for pods. To allow the Connect Agent to be installed in your cluster, you need to create a custom SCC.
The following sample SCC definition specifies the set of conditions that Connect Agent must run with in order to be accepted into the cluster:
# Connect Agent SCC
apiVersion: v1
kind: SecurityContextConstraints
metadata:
  name: gke-connect-scc
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we provide it for defense in depth.
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: MustRunAs
  ranges:
  - min: 1
    max: 65535
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1
    max: 65535
volumes:
- secret
readOnlyRootFilesystem: true
seccompProfiles:
- docker/default
users:
groups:
# Grants all service accounts in the gke-connect namespace access to this SCC
- system:serviceaccounts:gke-connect
Assuming that you've saved your SCC definition as gke-connect-scc.yaml, use the OpenShift oc command-line tool to create the gke-connect-scc SCC for your cluster, as follows:
$ oc create -f gke-connect-scc.yaml
To verify that the custom SCC has been created, run the following oc command:
$ oc get scc | grep gke-connect-scc
Resource usage and requirements
Typically, the Connect agent installed at registration uses 500m of CPU and 200Mi of memory. However, this usage can vary with the number of requests made to the agent per second and the size of those requests. These are affected by a number of factors, including the size of the cluster, the number of users accessing the cluster through the Cloud Console (the more users and workloads, the more requests), and the number of environ-enabled features on the cluster.
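To observe the agent's actual consumption in your own cluster, you can inspect the Pods in the gke-connect namespace; the metrics command below assumes the Kubernetes metrics API (for example, metrics-server) is available in the cluster:
kubectl -n gke-connect get pods
kubectl -n gke-connect top pods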