Setting up the Connect gateway

This guide is for platform administrators who need to set up the Connect gateway for use by their project's users and service accounts. Before reading this guide, you should be familiar with the concepts in our overview.

This setup allows users both to use the Connect gateway directly and to connect to registered clusters outside Google Cloud with their Google Cloud identity in the Cloud Console, as described in Logging in to a cluster from the Cloud Console.

Before you begin

  • Ensure that you have the following command line tools installed:

    • The latest version of the Cloud SDK, which includes gcloud, the command line tool for interacting with Google Cloud.
    • kubectl

    If you are using Cloud Shell as your shell environment for interacting with Google Cloud, these tools are installed for you.

  • Ensure that you have initialized the gcloud command line tool for use with your project, as shown in the sketch after this list.

  • This guide assumes that you have roles/owner in your project. If you are not a project owner, you may need additional permissions to perform some of the setup steps.
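
If you haven't initialized the SDK yet, a minimal sketch (example-project is a placeholder project ID):

# Initialize the SDK (interactive sign-in and default project selection)...
gcloud init

# ...or, if you are already authenticated, just set the default project.
gcloud config set project example-project

# Optionally, install kubectl through the Cloud SDK component manager.
gcloud components install kubectl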

Enable APIs

To add the gateway to your project, enable the Connect gateway API and its required dependency APIs. If your users only want to authenticate to clusters using the Cloud Console, you don't need to enable connectgateway.googleapis.com, but do need to enable the remaining APIs.

PROJECT_ID=example-project
gcloud services enable --project=${PROJECT_ID} \
    connectgateway.googleapis.com \
    anthos.googleapis.com \
    gkeconnect.googleapis.com \
    gkehub.googleapis.com \
    cloudresourcemanager.googleapis.com
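
To confirm that the APIs are enabled, you can list the project's enabled services; the filter below is just an illustrative spot-check:

# List the gateway-related services to confirm they are now enabled.
gcloud services list --enabled --project=${PROJECT_ID} \
    --filter="config.name:connectgateway.googleapis.com OR config.name:gkehub.googleapis.com"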

Verify registered clusters

Only clusters registered to your project's environ can be accessed through the Connect gateway. Anthos clusters on premises and on AWS are registered automatically when they are created; however, GKE clusters on Google Cloud and attached clusters must be registered separately. If you need to register a cluster, follow the instructions in Registering a cluster.
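
For example, a GKE cluster on Google Cloud can be registered with a single command. A sketch, where the membership name cluster-1 and the location/name pair us-central1-c/my-cluster are placeholders, and Workload Identity is assumed as the registration method (see Registering a cluster for the full options):

# Register an existing GKE cluster with the project's environ.
gcloud container hub memberships register cluster-1 \
    --gke-cluster=us-central1-c/my-cluster \
    --enable-workload-identity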

To verify that clusters have been registered, run the following command:

gcloud container hub memberships list

You should see a list of all your registered clusters, as in this example output:

NAME         EXTERNAL_ID
cluster-1    0192893d-ee0d-11e9-9c03-42010a8001c1
cluster-2    f0e2ea35-ee0c-11e9-be79-42010a8400c2

Grant IAM roles to users

Users and service accounts need the following additional Google Cloud roles to interact with connected clusters through the gateway, unless the user or account has roles/owner in the project:

  • roles/gkehub.gatewayAdmin. This role allows a user to access the Connect gateway API.
  • roles/gkehub.viewer. This role allows a user to retrieve cluster credentials.

You grant these roles using the gcloud projects add-iam-policy-binding command, as follows:


# [PROJECT_ID] is the project's unique identifier.
# [USER_ACCOUNT] is an email address, either USER_EMAIL_ADDRESS or GCPSA_EMAIL_ADDRESS
# [USER_EMAIL_ADDRESS] is the Google Cloud account used to interact with clusters via the CGW API.
# [GCPSA_EMAIL_ADDRESS] is the identity used for interacting with the CGW API and cluster.

PROJECT_ID=example-project

# MEMBER should be of the form `user|serviceAccount:$USER_ACCOUNT`, for example:
# MEMBER=user:foo@example.com
# MEMBER=serviceAccount:test@example-project.iam.gserviceaccount.com

MEMBER=user:foo@example.com

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member ${MEMBER} \
    --role roles/gkehub.gatewayAdmin
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member ${MEMBER} \
    --role roles/gkehub.viewer
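
If you want to double-check the result, one way is to flatten the project's IAM policy and filter on the member; a sketch reusing the variables above:

# Show all roles currently bound to the member.
gcloud projects get-iam-policy ${PROJECT_ID} \
    --flatten="bindings[].members" \
    --filter="bindings.members:${MEMBER}" \
    --format="table(bindings.role)"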

You can find out more about granting IAM permissions and roles in Granting, changing, and revoking access to resources.

Grant roles for access through the Cloud Console

For users who only want to interact with connected clusters using the Cloud Console, the following roles are required:

  • roles/gkehub.viewer. This role allows the user to view the clusters in the GKE console page.
  • roles/container.viewer. This role allows the user to view container resources in the Cloud Console.
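
You can grant these roles the same way as the gateway roles above; a minimal sketch reusing the PROJECT_ID and MEMBER variables set earlier:

# Grant the Cloud Console viewing roles to the same member.
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member ${MEMBER} \
    --role roles/gkehub.viewer
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member ${MEMBER} \
    --role roles/container.viewer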

Configure role-based access control (RBAC) policies

Finally, each cluster's Kubernetes API server needs to be able to authorize kubectl commands that come through the gateway from your specified users and service accounts. To ensure this, you need to update the RBAC policies on each cluster that you want to make accessible through the gateway. There are two policies that you need to update or add:

  • An impersonation policy that authorizes the Connect agent to send requests to the Kubernetes API server on behalf of a user. The following example shows how to create and apply such a policy for a user, saving the policy file as /tmp/impersonate.yaml and applying it to the cluster associated with the current context:
# [USER_ACCOUNT] is an email, either USER_EMAIL_ADDRESS or GCPSA_EMAIL_ADDRESS
USER_ACCOUNT=foo@example.com
cat <<EOF > /tmp/impersonate.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-impersonate
rules:
- apiGroups:
  - ""
  resourceNames:
  - ${USER_ACCOUNT}
  resources:
  - users
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-impersonate
roleRef:
  kind: ClusterRole
  name: gateway-impersonate
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
EOF
# Apply impersonation policy to the cluster.
kubectl apply -f /tmp/impersonate.yaml

Note that this policy only lets the Connect agent impersonate the user; it doesn't ensure that the impersonated request itself is authorized by the API server. For that, you also need to explicitly grant the user the appropriate RBAC permissions to perform Kubernetes operations, as described below.

  • A permissions policy that specifies which permissions the user has on the cluster. The following example shows how to grant the user cluster-admin permissions on the cluster, saving the policy file as /tmp/admin-permission.yaml and applying it to the cluster associated with the current context:
cat <<EOF > /tmp/admin-permission.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-cluster-admin
subjects:
- kind: User
  name: ${USER_ACCOUNT}
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
# Apply permission policy to the cluster.
kubectl apply -f /tmp/admin-permission.yaml
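
As a quick spot-check from an admin context on the cluster, you can ask the API server whether the user is now authorized; a sketch using the USER_ACCOUNT variable set earlier:

# Verify that the user now has cluster-admin rights ("yes" is expected).
kubectl auth can-i '*' '*' --as=${USER_ACCOUNT}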

You can find out more about specifying RBAC permissions in Using RBAC authorization.
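
To test the whole path end to end, a user or service account granted the roles above can fetch a gateway kubeconfig and run a command through it. A sketch, assuming the membership is named cluster-1 and that your Cloud SDK version exposes this command in the alpha track:

# Fetch a kubeconfig entry that routes kubectl through the Connect gateway.
gcloud alpha container hub memberships get-credentials cluster-1

# Subsequent kubectl commands go through the gateway.
kubectl get namespaces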

VPC Service Controls support

VPC Service Controls provides an additional layer of security defense for Google Cloud services that is independent of Identity and Access Management (IAM). While IAM enables granular identity-based access control, VPC Service Controls enables broader context-based perimeter security, including controlling data egress across the perimeter. For example, you can specify that only certain projects can access your BigQuery data. You can find out more about how VPC Service Controls works to protect your data in the VPC Service Controls Overview.

You can use VPC Service Controls with the Connect gateway for additional data security, provided that the APIs required to use the gateway can be accessed from within your specified service perimeter.
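
For example, if your perimeter uses VPC accessible services to restrict which APIs can be called from networks inside it, you would add the gateway's APIs to the allowed list. A hedged sketch, where my-perimeter is a placeholder perimeter name:

# Allow the gateway's APIs to be reached from inside the perimeter.
gcloud access-context-manager perimeters update my-perimeter \
    --add-vpc-allowed-services=connectgateway.googleapis.com,gkeconnect.googleapis.com,gkehub.googleapis.com \
    --enable-vpc-accessible-services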

What's next?