Prepare to set up with the GKE Gateway API
The configuration described in this document is supported for Preview customers, but we do not recommend it for new Cloud Service Mesh users. For more information, see the Cloud Service Mesh overview.
This guide tells you how to prepare the environment for using the Google Kubernetes Engine Gateway API with Cloud Service Mesh. At a high level, you need to perform the following steps:
- Enable the required Google Cloud API services.
- Deploy a GKE cluster.
- Configure IAM permissions.
- Install the required custom resource definitions (CRDs).
- Register the cluster to a fleet.
- (Optional) Enable Multi-Cluster Service Discovery.
- Enable the service mesh.
If you are not using GKE, use the service routing APIs and create a Mesh resource, as sketched below.
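For context, a minimal sketch of that flow looks like the following; sidecar-mesh and mesh.yaml are placeholder names, not required values:

echo "name: sidecar-mesh" > mesh.yaml

gcloud network-services meshes import sidecar-mesh \
    --source=mesh.yaml \
    --location=global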
Before you begin
Make sure that the components of your deployment meet these requirements:
- GKE must be version 1.20 or later.
- Only data planes with the xDS version 3 API and later are supported:
  - Minimum Envoy version: 1.20.0
  - Minimum gRPC bootstrap generator version: v0.14.0
- GKE clusters must be in VPC-native (Alias IP) mode.
- Self-managed Kubernetes clusters on Compute Engine, as opposed to GKE, are not supported.
- Any additional restrictions listed for the Gateway functionality on GKE apply to the Cloud Service Mesh integration with the GKE Gateway API.
- The service account for your GKE nodes and Pods must have permission to access the Traffic Director API. For more information on the required permissions, see Enabling the service account to access the Traffic Director API.
- Per-project resource usage and backend service quota limitations apply.
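You can spot-check the version and VPC-native requirements on an existing cluster with a command like the following, shown here for the gke-1 cluster that this guide creates later:

gcloud container clusters describe gke-1 \
    --zone=us-west1-a \
    --project=PROJECT_ID \
    --format="value(currentMasterVersion,ipAllocationPolicy.useIpAliases)"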
Enable the required Google Cloud API services
Run the following command to enable the required APIs, if they are not already enabled in your project:
gcloud services enable --project=PROJECT_ID \
    container.googleapis.com \
    gkehub.googleapis.com \
    multiclusteringress.googleapis.com \
    trafficdirector.googleapis.com \
    networkservices.googleapis.com
If you plan to include more than one cluster in your fleet, also enable the multiclusterservicediscovery API:

gcloud services enable --project=PROJECT_ID \
    multiclusterservicediscovery.googleapis.com
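To confirm which of these services are enabled, you can list them and filter the output, for example:

gcloud services list --enabled --project=PROJECT_ID | grep -E 'trafficdirector|networkservices|gkehub'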
Deploy a GKE cluster
Use these instructions to deploy a GKE cluster.
Create a GKE cluster called gke-1 in the us-west1-a zone:

gcloud container clusters create gke-1 \
    --zone=us-west1-a \
    --enable-ip-alias \
    --workload-pool=PROJECT_ID.svc.id.goog \
    --scopes=https://www.googleapis.com/auth/cloud-platform \
    --enable-mesh-certificates \
    --release-channel=regular \
    --project=PROJECT_ID
- --enable-ip-alias: This flag creates a VPC-native cluster and makes the Pods' IP addresses routable within the VPC network.
- --workload-pool: This flag lets your cluster participate in the project's workload identity pool.
- --scopes: This flag specifies the OAuth scopes assigned to the cluster nodes.
- --release-channel: This flag enrolls the cluster in the regular release channel.
- --enable-mesh-certificates: This flag enables the Cloud Service Mesh auto mTLS feature if it becomes available in the future.
Get the cluster credentials:
gcloud container clusters get-credentials gke-1 --zone=us-west1-a
Rename the cluster context:
kubectl config rename-context gke_PROJECT_ID_us-west1-a_gke-1 gke-1
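To confirm that kubectl can reach the cluster through the renamed context, you can list the nodes:

kubectl get nodes --context=gke-1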
Configure the IAM permissions for the data plane
For this demonstration deployment, you grant the Cloud Service Mesh client role
roles/trafficdirector.client
to all authenticated users, including all service
accounts, in the GKE cluster. This IAM role is
required to authorize Cloud Service Mesh clients in the data plane, such as
Envoy proxies, to receive configuration from Cloud Service Mesh.
If you do not want to grant the client role to all authenticated users and
prefer to restrict the role to service accounts, see the GKE
workload identity guide
to set up a specialized Kubernetes service account with the role
roles/trafficdirector.client
for your services.
Grant the client role to the service accounts:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "group:PROJECT_ID.svc.id.goog:/allAuthenticatedUsers/" \
    --role "roles/trafficdirector.client"
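If you take the more restrictive approach described above, the binding for a dedicated Kubernetes service account looks similar to the following sketch, where NAMESPACE and KSA_NAME are placeholders for your own service account:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]" \
    --role "roles/trafficdirector.client"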
Install the required custom resource definitions
Install the custom resource definitions (CRDs) required for using the Gateway API with Cloud Service Mesh:
kubectl apply -k "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.6.0"
kubectl kustomize "https://github.com/GoogleCloudPlatform/gke-networking-recipes.git/gateway-api/config/mesh/crd" \
    | kubectl apply -f -
Verify that required CRDs are installed automatically in the cluster by running the following command:
kubectl get crds
The output lists the following CRDs and others not related to the Gateway API, all with different creation dates:
NAME                                        CREATED AT
gatewayclasses.gateway.networking.k8s.io    2023-08-08T05:29:03Z
gateways.gateway.networking.k8s.io          2023-08-08T05:29:03Z
grpcroutes.gateway.networking.k8s.io        2023-08-08T05:29:03Z
httproutes.gateway.networking.k8s.io        2023-08-08T05:29:03Z
referencegrants.gateway.networking.k8s.io   2023-08-08T05:29:04Z
tcproutes.gateway.networking.k8s.io         2023-08-08T05:29:04Z
tdgrpcroutes.net.gke.io                     2023-08-08T05:29:23Z
tdmeshes.net.gke.io                         2023-08-08T05:29:23Z
tlsroutes.gateway.networking.k8s.io         2023-08-08T05:29:05Z
udproutes.gateway.networking.k8s.io         2023-08-08T05:29:05Z
The custom resources tdmeshes.net.gke.io and tdgrpcroutes.net.gke.io are installed in the previous step. The CRDs that are part of the net.gke.io API group are specific to GKE. These resources are not part of the OSS Gateway API implementation, which is in the networking.k8s.io API group.
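To list only these GKE-specific resources, you can query their API group directly:

kubectl api-resources --api-group=net.gke.io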
Register the cluster to a fleet
After the cluster is successfully created, you must register the cluster to a fleet. Registering your cluster to a fleet lets you selectively enable features on the registered cluster.
Register the cluster to the fleet:
gcloud container hub memberships register gke-1 \
    --gke-cluster us-west1-a/gke-1 \
    --location global \
    --project=PROJECT_ID
Confirm that the cluster is registered with the fleet:
gcloud container hub memberships list --project=PROJECT_ID
The output is similar to the following:
NAME   EXTERNAL_ID
gke-1  657e835d-3b6b-4bc5-9283-99d2da8c2e1b
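For more detail about the registration, you can describe the individual membership:

gcloud container hub memberships describe gke-1 --project=PROJECT_ID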
(Optional) Enable Multi-Cluster Service Discovery
The Multi-Cluster Service Discovery feature lets you export cluster local services to all clusters registered to the fleet. This step is optional if you don't plan to include more than one cluster in your fleet.
Enable Multi-Cluster Service Discovery:
gcloud container hub multi-cluster-services enable \
    --project PROJECT_ID
Grant the Identity and Access Management (IAM) role required for Multi-Cluster Service Discovery:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
    --role "roles/compute.networkViewer"
Confirm that Multi-Cluster Service Discovery is enabled for the registered cluster. It might take several minutes for all of the clusters to be displayed:
gcloud container hub multi-cluster-services describe --project=PROJECT_ID
You should see the membership for gke-1, similar to the following:

createTime: '2021-04-02T19:34:57.832055223Z'
membershipStates:
  projects/PROJECT_NUM/locations/global/memberships/gke-1:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2021-05-27T11:03:07.770208064Z'
name: projects/PROJECT_NUM/locations/global/features/multiclusterservicediscovery
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2021-04-02T19:34:58.983512446Z'
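After the feature is active, you export a Service to other clusters in the fleet by creating a ServiceExport object with the same name and namespace as the Service; my-namespace and my-service below are placeholder names:

kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: my-namespace
  name: my-service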
Enable the Cloud Service Mesh GKE service mesh
In this section, you enable the service mesh.
Enable the Cloud Service Mesh GKE service mesh on the cluster that you registered with your fleet:
gcloud container hub ingress enable \
    --config-membership=projects/PROJECT_ID/locations/global/memberships/gke-1 \
    --project=PROJECT_ID
Confirm that the feature is enabled:
gcloud container hub ingress describe --project=PROJECT_ID
You should see output similar to the following:
createTime: '2021-05-26T13:27:37.460383111Z'
membershipStates:
  projects/PROJECT_NUM/locations/global/memberships/gke-1:
    state:
      code: OK
    updateTime: '2021-05-27T15:08:19.397896080Z'
resourceState:
  state: ACTIVE
spec:
  multiclusteringress:
    configMembership: projects/PROJECT_ID/locations/global/memberships/gke-1
state:
  state:
    code: OK
    description: Ready to use
    updateTime: '2021-05-26T13:27:37.899549111Z'
updateTime: '2021-05-27T15:08:19.397895711Z'
Grant the following Identity and Access Management (IAM) roles, which are required by the Gateway API controller:
- roles/container.developer — This role allows the controller to manage in-cluster Kubernetes resources.
- roles/compute.networkAdmin — This role allows the controller to manage Cloud Service Mesh service mesh configurations.
export PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
    --role "roles/container.developer"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
    --role "roles/compute.networkAdmin"
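To confirm both bindings, you can inspect the project's IAM policy for the multi-cluster Ingress service agent, for example:

gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
    --format="table(bindings.role)"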
What's next
To set up an example deployment, read these guides:
- Set up an Envoy sidecar service mesh
- Set up a proxyless gRPC service mesh
- Set up a multi-cluster service mesh