Set up the Connect gateway with Google Groups
This guide is for platform administrators who need to set up the Connect gateway for use by their project's user accounts, using Google Groups for authorization. Before reading this guide, you should be familiar with the concepts in our overview. To authorize individual accounts, see the default setup.
This setup lets users log in to configured fleet clusters using the Google Cloud CLI, the Connect gateway, and the Google Cloud console.
This feature uses Google Groups associated with Google Workspace or any edition of Cloud Identity.
Supported cluster types
If you're using GKE clusters on Google Cloud with the Connect gateway, you don't need to follow this entire setup with GKE Identity Service to use Google Groups for authorization. Instead, follow the instructions in Configure Google Groups for RBAC, which also lets users log in to GKE clusters from the Google Cloud console using Google Groups for access control. Once you've done this, follow the instructions below in Grant IAM roles to Google Groups to let group members access clusters through the Connect gateway.
You can set up access control with Google Groups through the Connect gateway for the following cluster types:
- Registered GKE clusters
- Clusters in Google Distributed Cloud (on-premises) deployments on VMware and on bare metal, from Anthos (GKE Enterprise) version 1.13 onwards
- GKE on AWS and GKE on Azure, from Kubernetes version 1.25 onwards
- Attached clusters, from versions 1.26.0-gke.8, 1.27.0-gke.5, 1.28.0-gke.2 or later
If you need to upgrade on-premises clusters to use this feature, see Upgrading clusters on VMware and Upgrading clusters on bare metal.
To use this feature with environments other than those listed above, please contact Cloud Customer Care or the Connect gateway team.
How it works
As described in the overview, it's often useful to be able to give users access to clusters based on their membership in Google Groups: groups created in Google Workspace. Authorizing based on group membership means you don't have to set up separate authorization for each account, making policies simpler to manage and easier to audit. For example, you can share cluster access with an entire team, removing the need to manually add or remove individual users when they join or leave the team. With some additional setup using GKE Identity Service, you can configure the Connect gateway to get Google Group membership information for each user who logs in to the cluster. You can then use this group information in your access control policies.
The following shows the typical flow for a user authenticating to and running commands against a cluster with this service enabled. For this flow to be successful, an RBAC policy must exist on the cluster for a group that:

- Contains the user alice@example.com as a member.
- Is a nested group of gke-security-groups@example.com.
1. The user alice@example.com logs in using their Google identity and, if they plan to use the cluster from the command line, gets the cluster's gateway kubeconfig as described in Using the Connect gateway.
2. The user sends a request by running a kubectl command or opening the Google Kubernetes Engine Workloads or Object Browser pages in the Google Cloud console.
3. The request is received by the Connect service, which performs an authorization check with IAM.
4. The Connect service forwards the request to the Connect Agent running on the cluster. The request is accompanied by the user's credential information for use in authentication and authorization on the cluster.
5. The Connect Agent forwards the request to the Kubernetes API server.
6. The Kubernetes API server forwards the request to GKE Identity Service, which validates the request.
7. GKE Identity Service returns the user and group information to the Kubernetes API server. The Kubernetes API server can then use this information to authorize the request based on the cluster's configured RBAC policies.
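Assuming the rest of this guide's setup is complete, the command-line side of this flow can be sketched as follows. The membership and project names are placeholders, and this is an illustrative sequence rather than part of the required setup:

```shell
# Step 1: the user logs in with their Google identity.
gcloud auth login

# Step 1 (continued): get the cluster's gateway kubeconfig.
gcloud container fleet memberships get-credentials MEMBERSHIP_NAME \
    --project=PROJECT_ID

# Steps 2-7: each kubectl command then travels through the Connect service,
# the Connect Agent, the Kubernetes API server, and GKE Identity Service.
kubectl get namespaces
```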
Before you begin
Ensure that you have the following command line tools installed:
- The latest version of the Google Cloud CLI, the command line tool for interacting with Google Cloud.
- The Kubernetes command line tool, kubectl, for interacting with your clusters.
If you are using Cloud Shell as your shell environment for interacting with Google Cloud, these tools are installed for you.
Ensure that you have initialized the gcloud CLI for use with your project.
This guide assumes that you have roles/owner in your project. If you are not a project owner, you may need additional permissions to perform some of the setup steps.

For clusters outside Google Cloud, GKE Identity Service needs to call the Google Identity API from your cluster. Check if your network policy requires outbound traffic to go through a proxy.
Set up users and groups
Ensure that the groups that you want to use with this feature are set up as follows:
1. Ensure there is a group in your organization's Google Workspace with the format gke-security-groups@YOUR-DOMAIN. If you don't have such a group, follow the instructions in Create a group in your organization to create the group using your Google Workspace Admin Console.
2. Follow the instructions in Add a group to another group to add the groups you want to use for access control as nested groups of gke-security-groups. Do not add individual users as members of gke-security-groups.
User accounts that you want to use with this feature should use the same domain name as that of their group.
Enable APIs
To add the gateway to your project, enable the Connect gateway API and its required dependency APIs. If your users only want to authenticate to clusters using the Google Cloud console, you don't need to enable connectgateway.googleapis.com, but you do need to enable the remaining APIs.
PROJECT_ID=example-project
gcloud services enable --project=${PROJECT_ID} \
connectgateway.googleapis.com \
anthos.googleapis.com \
gkeconnect.googleapis.com \
gkehub.googleapis.com \
cloudresourcemanager.googleapis.com
Set up GKE Identity Service
Connect gateway's Google Groups support feature uses GKE Identity Service to get group membership information from Google. You can find out more about GKE Identity Service in Introducing GKE Identity Service.
If you're using GKE clusters with the gateway, you do not need to set up GKE Identity Service to use Google Groups support. Instead, follow the instructions in Configure Google Groups for RBAC, and then continue to Grant IAM roles to Google Groups to grant access to clusters through the gateway.
If you are using GKE attached clusters with the gateway, GKE Identity Service is not required for Google Groups support. Follow the instructions for your chosen cluster type to set up Google Groups support:
- Connect to your EKS attached cluster
- Connect to your AKS attached cluster
- Connect to your other cluster types
Ensure GKE Identity Service is installed
GKE Identity Service is installed by default on GKE clusters from version 1.7 onwards (though Google Groups support requires version 1.13 or higher). You can confirm that it is installed correctly on your cluster by running the following command:
kubectl --kubeconfig CLUSTER_KUBECONFIG get all -n anthos-identity-service
Replace CLUSTER_KUBECONFIG with the path to the cluster's kubeconfig file.
Configure Google Groups support
If you're using GKE on AWS or GKE on Azure, your cluster is automatically configured to support Google Groups, and you can skip to Grant IAM roles to Google Groups.
If you're using Google Distributed Cloud on VMware or bare metal, the way you set up GKE Identity Service determines how you need to configure the Google Groups feature.
If you're using GKE Identity Service for the first time, you can choose between fleet-level setup (recommended) and per-cluster setup for configuring Google Groups.

If you have used GKE Identity Service before, keep in mind the following:

- If you have already set up GKE Identity Service for another identity provider at the fleet level, the Google Groups feature is enabled for you by default. See the Fleet section below for more details and any additional setup you may require.
- If you have already set up GKE Identity Service for another identity provider on a per-cluster basis, see the Per-cluster section below for instructions to update your configuration for the Google Groups feature.
Fleet
You can use the Google Cloud console or the command line to configure Google Groups access at the fleet level.
If you have already configured GKE Identity Service at the fleet level with another identity provider (such as Microsoft AD FS or Okta), the Connect gateway Google Groups feature is already enabled by default on configured clusters, provided that the Google identity provider is reachable without a proxy.
Console
If you have not previously set up GKE Identity Service for a fleet, follow the instructions in Configure clusters for GKE Identity Service.
Select clusters and update configuration
- In the Google Cloud console, go to the Feature Manager page.
- Click Details in the Identity Service panel. Your project's cluster details are displayed.
- Click Update identity service to open the setup pane.
- Select the clusters you want to configure. You can choose individual clusters, or specify that you want all clusters to be configured with the same identity configuration.
- In the Configure Identity Providers section, you can choose to retain, add, update, or remove an identity provider.
- Click Continue to go to the next configuration step. If you've selected at least one eligible cluster for this setup, the Google Authentication section is displayed.
- Select Enable to enable Google authentication for the selected clusters. If you need to access the Google identity provider through a proxy, enter the Proxy details.
- Click Update Configuration. This applies the identity configuration on your selected clusters.
gcloud
If you have not previously set up GKE Identity Service for a fleet, follow the instructions in Configure clusters for GKE Identity Service. Specify only the following configuration in your auth-config.yaml file:

spec:
  authentication:
  - name: google-authentication-method
    google:
      disable: false
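For reference, a complete auth-config.yaml for this case might look like the following sketch. The apiVersion and kind shown here are assumptions based on the usual GKE Identity Service client configuration format; check the file format documented for your version before applying:

```yaml
apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
spec:
  authentication:
  - name: google-authentication-method
    google:
      disable: false
```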
Configuring Google Groups access using a proxy

If you need to access the Google identity provider through a proxy, use a proxy field in your auth-config.yaml file. You might need to set this if, for example, your cluster is in a private network and needs to connect to a public identity provider. You must add this configuration even if you have already configured GKE Identity Service for another provider.

To configure the proxy, update the authentication section of your existing auth-config.yaml file as follows:
spec:
  authentication:
  - name: google-authentication-method
    google:
      disable: false
    proxy: PROXY_URL
where:

- disable (optional) denotes whether to opt out of the Google Groups feature for clusters. This value is set to false by default. If you'd like to opt out of this feature, set it to true.
- PROXY_URL (optional) is the proxy server address to connect to the Google identity provider. For example: http://user:password@10.10.10.10:8888
Apply the configuration
To apply the configuration to a cluster, run the following command:
gcloud container fleet identity-service apply \
    --membership=CLUSTER_NAME \
    --config=/path/to/auth-config.yaml
where CLUSTER_NAME is your cluster's unique membership name within the fleet.
Once applied, this configuration is managed by the GKE Identity Service controller. Any local changes made to the GKE Identity Service client configuration are reconciled back by the controller to the configuration specified in this setup.
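To confirm what was applied, you can inspect the membership's identity service configuration. As a sketch, assuming the describe subcommand of the same gcloud command group, this looks like:

```shell
# Show the identity service configuration applied to this membership.
gcloud container fleet identity-service describe \
    --membership=CLUSTER_NAME
```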
Per cluster
To configure your cluster to use GKE Identity Service with the Google Groups feature, you need to update the cluster's GKE Identity Service ClientConfig. This is a Kubernetes custom resource used for cluster configuration. Each GKE Enterprise cluster has a ClientConfig resource named default in the kube-public namespace that you update with your configuration details.
To edit the configuration, use the following command:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-public edit clientconfig default

where USER_CLUSTER_KUBECONFIG is the path to the kubeconfig file of your cluster. If there are multiple contexts in the kubeconfig, the current context is used. You might need to reset the current context to the correct cluster before running the command.
Here's an example of how you can update the ClientConfig with a new authentication method of type google to enable the Google Groups feature. If the internalServer field is empty, make sure it's set to https://kubernetes.default.svc, as shown below.
spec:
  authentication:
  - google:
      audiences:
      - "CLUSTER_IDENTIFIER"
    name: google-authentication-method
    proxy: PROXY_URL
  internalServer: https://kubernetes.default.svc
where CLUSTER_IDENTIFIER (required) denotes the membership details of your cluster. You can retrieve your cluster's membership details using the following command:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get memberships membership -o yaml

where USER_CLUSTER_KUBECONFIG is the path to the kubeconfig file for the cluster. In the response, refer to the spec.owner.id field to retrieve the cluster's membership details. Here's an example response showing a cluster's membership details:

id: //gkehub.googleapis.com/projects/123456789/locations/global/memberships/xy-ab12cd34ef

which corresponds to the following format:

//gkehub.googleapis.com/projects/PROJECT_NUMBER/locations/global/memberships/MEMBERSHIP
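Putting the pieces together, the edited default ClientConfig might look like the following sketch. The apiVersion shown is an assumption based on the usual GKE Identity Service resource format, the membership ID is the example value from above, and the proxy field is included only if your cluster reaches Google through a proxy:

```yaml
apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
metadata:
  name: default
  namespace: kube-public
spec:
  authentication:
  - google:
      audiences:
      - "//gkehub.googleapis.com/projects/123456789/locations/global/memberships/xy-ab12cd34ef"
    name: google-authentication-method
  internalServer: https://kubernetes.default.svc
```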
Grant IAM roles to Google Groups
Groups need the following additional Google Cloud roles to interact with connected clusters through the gateway:
- roles/gkehub.gatewayAdmin. This role allows group members to access the Connect gateway API.
  - If group members only need read-only access to connected clusters, you can use roles/gkehub.gatewayReader instead.
  - If group members need read/write access to connected clusters, you can use roles/gkehub.gatewayEditor instead.
- roles/gkehub.viewer. This role allows group members to view registered cluster memberships.
You grant these roles using the gcloud projects add-iam-policy-binding command, as follows:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=group:GROUP_NAME@DOMAIN \
    --role=GATEWAY_ROLE

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=group:GROUP_NAME@DOMAIN \
    --role=roles/gkehub.viewer
where:

- GROUP_NAME is the Google Group you want to grant the role to.
- DOMAIN is your Google Workspace domain.
- GROUP_NAME@DOMAIN is a nested group under gke-security-groups@DOMAIN.
- GATEWAY_ROLE is one of roles/gkehub.gatewayAdmin, roles/gkehub.gatewayReader, or roles/gkehub.gatewayEditor.
- PROJECT_ID is your project ID.
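As a worked example with placeholder values, granting read-only gateway access to a hypothetical cluster-view-team@example.com group in a project named my-project would look like this:

```shell
# Allow group members to use the Connect gateway API (read-only).
gcloud projects add-iam-policy-binding my-project \
    --member=group:cluster-view-team@example.com \
    --role=roles/gkehub.gatewayReader

# Allow group members to view registered cluster memberships.
gcloud projects add-iam-policy-binding my-project \
    --member=group:cluster-view-team@example.com \
    --role=roles/gkehub.viewer
```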
You can find out more about granting IAM permissions and roles in Granting, changing, and revoking access to resources.
Configure role-based access control (RBAC) policies
Finally, each cluster's Kubernetes API server needs to be able to authorize kubectl commands that come through the gateway from your specified groups. For each cluster, you need to add an RBAC permissions policy that specifies which permissions the group has on the cluster.

The following example shows how to grant members of the cluster-admin-team group cluster-admin permissions on the cluster, save the policy file as /tmp/admin-permission.yaml, and apply it to the cluster associated with the current context. Be sure to also include the cluster-admin-team group as a nested group under the gke-security-groups group.
cat <<EOF > /tmp/admin-permission.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-cluster-admin-group
subjects:
- kind: Group
  name: cluster-admin-team@example.com
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

# Apply permission policy to the cluster.
kubectl apply --kubeconfig=KUBECONFIG_PATH -f /tmp/admin-permission.yaml
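For read-only access, the same pattern applies with Kubernetes' built-in view ClusterRole. The following sketch uses a hypothetical cluster-view-team@example.com group (which would also need to be nested under gke-security-groups and paired with roles/gkehub.gatewayReader in IAM):

```shell
# Write a ClusterRoleBinding granting the built-in read-only "view"
# ClusterRole to the (hypothetical) cluster-view-team group.
cat <<EOF > /tmp/view-permission.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-cluster-view-group
subjects:
- kind: Group
  name: cluster-view-team@example.com
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF

# Apply it the same way as the admin policy:
# kubectl apply --kubeconfig=KUBECONFIG_PATH -f /tmp/view-permission.yaml
```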
You can find out more about specifying RBAC permissions in Using RBAC authorization.
What's next?
- Learn how to use the Connect gateway to connect to clusters from the command line.
- See an example of how to use the Connect gateway as part of your DevOps automation in our Integrating with Cloud Build tutorial.