Set up a multi-cluster service mesh
This guide demonstrates how to add a new GKE cluster to an existing service mesh.
Before you begin
Before you add a cluster, make sure that you complete the instructions in Prepare to deploy with the GKE Gateway API, including Enable Multi-cluster Services.
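You can optionally confirm that multi-cluster Services is enabled for your project before you proceed. A minimal check, assuming the same alpha gcloud release track this guide uses for Fleet commands:

```bash
# Describe the multi-cluster Services feature for the project. If the
# prerequisite step was completed, the feature state should be ACTIVE.
gcloud alpha container hub multi-cluster-services describe --project=PROJECT_ID
```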
Create a new GKE cluster
Create a new cluster by using the following command:
```bash
gcloud container clusters create gke-2 \
  --zone=us-west1-a \
  --enable-ip-alias \
  --workload-pool=PROJECT_ID.svc.id.goog \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --release-channel regular \
  --project=PROJECT_ID
```
Switch to the cluster you just created by issuing the following command:
```bash
gcloud container clusters get-credentials gke-2 --zone us-west1-a
```
Rename the cluster context:
```bash
kubectl config rename-context gke_PROJECT_ID_us-west1-a_gke-2 gke-2
```
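To confirm that both cluster contexts are now available under their short names, you can list them:

```bash
# Both gke-1 and gke-2 should appear in the context list.
kubectl config get-contexts
```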
Register the cluster to a Fleet
After the cluster is created, register the cluster to your Fleet:
```bash
gcloud alpha container hub memberships register gke-2 \
  --gke-cluster us-west1-a/gke-2 \
  --enable-workload-identity \
  --project=PROJECT_ID
```
Verify that the clusters are registered with the Fleet:
```bash
gcloud alpha container hub memberships list --project=PROJECT_ID
```
The Fleet includes both the cluster you just created and the cluster you created previously:
```
NAME   EXTERNAL_ID
gke-1  657e835d-3b6b-4bc5-9283-99d2da8c2e1b
gke-2  f3727836-9cb0-4ffa-b0c8-d51001742f19
```
Deploy Envoy sidecar injector to the new GKE cluster
Follow the instructions for deploying the Envoy sidecar injector and deploy the injector to cluster gke-2.
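One way to spot-check that the injector is in place is to list the cluster's mutating webhooks, because sidecar injectors register a MutatingWebhookConfiguration to intercept Pod creation. The exact webhook name depends on how the injector guide deploys it, so treat this as a sketch:

```bash
# The injector's admission webhook should appear in this list.
kubectl get mutatingwebhookconfigurations --context gke-2
```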
Expand your service mesh to the new GKE cluster
Deploy an Envoy sidecar service mesh shows you how to configure a service mesh in cluster gke-1, where the store service runs. This section shows you how to expand the service mesh to include a payments service running in cluster gke-2. A Mesh resource already exists in the config cluster, so you don't need to create a Mesh resource in the new cluster.
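Because the Mesh resource lives only in the config cluster, you can optionally confirm that it is present in gke-1 before expanding the mesh. The resource name td-mesh in the default namespace matches the parentRefs used later in this guide:

```bash
# The TDMesh resource is stored in the config cluster (gke-1), not in gke-2.
kubectl get tdmesh td-mesh --context gke-1 --namespace default
```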
Deploy the payments service
In the payments.yaml file, save the following manifest:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: payments
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments
      version: v1
  template:
    metadata:
      labels:
        app: payments
        version: v1
    spec:
      containers:
      - name: whereami
        image: gcr.io/google-samples/whereami:v1.2.19
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: payments
  namespace: payments
spec:
  selector:
    app: payments
  ports:
  - port: 8080
    targetPort: 8080
```
Apply the manifest to the cluster gke-2:

```bash
kubectl apply --context gke-2 -f payments.yaml
```
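You can optionally confirm that the Deployment's two replicas are running before you export the service:

```bash
# Two payments Pods should reach the Running state.
kubectl get pods --context gke-2 --namespace payments
```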
Export the payments service
All Gateway API resources are stored centrally in the config cluster gke-1. Services in other clusters in the Fleet must be exported so that Gateway API resources in the gke-1 cluster can reference them when you configure the service mesh's networking behavior. For a detailed explanation of how ServiceExport and ServiceImport work, read Multi-cluster Services.
Create the namespace payments in cluster gke-1. The payments service in cluster gke-2 is exported to all clusters in the Fleet that have the same namespace.

```bash
kubectl create namespace payments --context gke-1
```
In the export-payments.yaml file, save the following manifest:

```yaml
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: payments
  namespace: payments
```
Apply the ServiceExport manifest in the gke-2 cluster:

```bash
kubectl apply --context gke-2 -f export-payments.yaml
```
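To confirm that the export was accepted in the exporting cluster, you can list the ServiceExport resources in the payments namespace:

```bash
# The payments ServiceExport should be listed in gke-2.
kubectl get serviceexports --context gke-2 --namespace payments
```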
After a few minutes, run the following command to verify that the accompanying ServiceImport resources were created by the multi-cluster Services controller in gke-1:

```bash
kubectl get serviceimports --context gke-1 --namespace payments
```
The output should look similar to the following:
```
NAME       TYPE           IP                  AGE
payments   ClusterSetIP   ["10.112.31.15"]    6m54s
```
Configure an HTTPRoute resource for the payments service
In the payments-route.yaml file, save the following HTTPRoute manifest:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: payments-route
  namespace: payments
spec:
  parentRefs:
  - name: td-mesh
    namespace: default
    group: net.gke.io
    kind: TDMesh
  hostnames:
  - "example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /payments
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      namespace: payments
      name: payments
      port: 8080
```
Apply the route manifest to gke-1:

```bash
kubectl apply --context gke-1 -f payments-route.yaml
```
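You can verify that the route resource exists in the config cluster:

```bash
# The HTTPRoute is stored in the config cluster (gke-1).
kubectl get httproute payments-route --context gke-1 --namespace payments
```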
Validate the deployment
Inspect the Mesh status and events to verify that the Mesh and HTTPRoute are deployed successfully.
Run the following command:
```bash
kubectl describe tdmesh td-mesh --context gke-1
```
The output should be similar to the following:
```
...
Status:
  Conditions:
    Last Transition Time:  2022-04-14T22:49:56Z
    Message:
    Reason:                MeshReady
    Status:                True
    Type:                  Ready
    Last Transition Time:  2022-04-14T22:27:17Z
    Message:
    Reason:                Scheduled
    Status:                True
    Type:                  Scheduled
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  ADD     23m   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  UPDATE  23m   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  SYNC    23m   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  SYNC    71s   mc-mesh-controller  SYNC on default/td-mesh was a success
```
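If you script these steps, you can block until the mesh reports the Ready condition shown above instead of polling the describe output. A minimal sketch using kubectl wait:

```bash
# Wait up to five minutes for the Ready condition on the TDMesh resource.
kubectl wait tdmesh/td-mesh --context gke-1 --namespace default \
  --for=condition=Ready --timeout=5m
```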
To verify the deployment, deploy a client Pod to one of the clusters. In the client.yaml file, save the following:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: client
  name: client
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: client
  template:
    metadata:
      labels:
        run: client
    spec:
      containers:
      - name: client
        image: curlimages/curl
        command:
        - sh
        - -c
        - while true; do sleep 1; done
```
Apply the manifest, replacing $CLUSTER with gke-1 or gke-2:

```bash
kubectl apply -f client.yaml --context $CLUSTER
```
The sidecar injector running in the cluster automatically injects an Envoy container into the client Pod.
To verify that the Envoy container is injected, run the following command:
```bash
kubectl describe pods -l run=client --context $CLUSTER
```
The output is similar to the following:
```
...
Init Containers:
  # istio-init sets up traffic interception for the Pod.
  istio-init:
    ...
  # td-bootstrap-writer generates the Envoy bootstrap file for the Envoy container.
  td-bootstrap-writer:
    ...
Containers:
  # client is the container that runs application code.
  client:
    ...
  # envoy is the container that runs the injected Envoy proxy.
  envoy:
    ...
```
After the mesh and the client Pod are provisioned, send a request from the client Pod to the store service:

```bash
# Get the name of the client Pod.
CLIENT_POD=$(kubectl get pod --context $CLUSTER -l run=client -o=jsonpath='{.items[0].metadata.name}')

# The VIP where the following request will be sent. Because requests from the
# client container are redirected to the Envoy proxy, the IP address can be
# any other address, such as 10.0.0.2 or 192.168.0.1.
VIP='10.0.0.1'

# Command to send a request to store.
TEST_CMD="curl -v -H 'Host: example.com' $VIP/store"

# Execute the test command in the client container.
kubectl exec -it $CLIENT_POD -c client --context $CLUSTER -- /bin/sh -c "$TEST_CMD"
```
The output should show that one of the store Pods in gke-1 serves the request:

```
{
  "cluster_name": "gke-1",
  "zone": "us-central1-a",
  "host_header": "example.com",
  ...
}
```
Send a request to the payments service:

```bash
# Command to send a request to payments.
TEST_CMD="curl -v -H 'host: example.com' $VIP/payments"

# Execute the test command in the client container.
kubectl exec -it $CLIENT_POD -c client --context $CLUSTER -- /bin/sh -c "$TEST_CMD"
```
The output should show that one of the payments Pods in gke-2 serves the request:

```
{
  "cluster_name": "gke-2",
  "zone": "us-west1-a",
  "host_header": "example.com",
  ...
}
```
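Because the payments Deployment runs two replicas, sending the request several times can show responses from both Pods. A quick loop, assuming the CLIENT_POD, VIP, and CLUSTER variables defined earlier in this section:

```bash
# Send five requests to the payments service; the whereami responses may
# alternate between the two payments Pods in gke-2.
for i in 1 2 3 4 5; do
  kubectl exec --context $CLUSTER $CLIENT_POD -c client -- \
    /bin/sh -c "curl -s -H 'host: example.com' $VIP/payments"
done
```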