This page describes how to deploy Kubernetes Gateway resources for load balancing across multiple Google Kubernetes Engine (GKE) clusters. Before deploying multi-cluster Gateways, see Enabling multi-cluster Gateways to prepare your environment. For deploying Gateways to just a single GKE cluster, see Deploying Gateways.
Multi-cluster Gateways
A multi-cluster Gateway is a Gateway resource that load balances traffic across multiple Kubernetes clusters. In GKE, the gke-l7-gxlb-mc, gke-l7-global-external-managed-mc, and gke-l7-rilb-mc GatewayClasses deploy multi-cluster Gateways that provide HTTP routing, traffic splitting, traffic mirroring, health-based failover, and more across different GKE clusters, Kubernetes Namespaces, and regions. Multi-cluster Gateways make managing application networking across many clusters and teams easy, secure, and scalable for infrastructure administrators.
This page introduces three examples to teach you how to deploy multi-cluster Gateways using the GKE Gateway controller:
- Example 1: An external, multi-cluster Gateway providing load balancing across two GKE clusters for internet traffic.
- Example 2: Blue-green, weight-based traffic splitting and traffic mirroring across two GKE clusters for internal VPC traffic.
- Example 3: Capacity-based load balancing across two GKE clusters in different regions, showing how traffic overflows between clusters when requests exceed Service capacity.
Each of the examples will use the same store and site applications to model a real-world scenario where an online shopping service and a website service are owned and operated by separate teams and deployed across a fleet of shared GKE clusters. Each of the examples highlights different topologies and use cases enabled by multi-cluster Gateways.
Multi-cluster Gateways require some environmental preparation before they can be deployed. Before proceeding, follow the steps in Enabling multi-cluster Gateways:
- Deploy GKE clusters.
- Register your clusters to a fleet.
- Enable the multi-cluster Service and multi-cluster Gateway controllers.
Lastly, review the GKE Gateway controller Preview limitations and known issues before using it in your environment.
Multi-cluster, multi-region, external Gateway
In this tutorial, you create an external multi-cluster Gateway that serves external traffic to an application running across two GKE clusters.
In the following steps you:
- Deploy the sample store application to the gke-west-1 and gke-east-1 clusters.
- Configure ServiceExport resources on each cluster to export the Services into your fleet.
- Deploy a gke-l7-gxlb-mc Gateway and an HTTPRoute to your config cluster, gke-west-1. You could use a gke-l7-global-external-managed Gateway as an alternative to benefit from all the advanced traffic management capabilities of the global external HTTP(S) load balancer.
After the application and Gateway resources are deployed, you can control traffic across the two GKE clusters using path-based routing:
- Requests to /west are routed to store Pods in the gke-west-1 cluster.
- Requests to /east are routed to store Pods in the gke-east-1 cluster.
- Requests to any other path are routed to either cluster, according to its health, capacity, and proximity to the requesting client.
Deploying the demo application
Create the store Deployment and Namespace in all three of the clusters that were deployed in Enabling multi-cluster Gateways:

kubectl apply --context gke-west-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store.yaml
kubectl apply --context gke-west-2 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store.yaml
kubectl apply --context gke-east-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store.yaml
It deploys the following resources to each cluster:
namespace/store created
deployment.apps/store created
All examples in this page use the app deployed in this step. Make sure that the app is deployed across all three clusters before trying any of the remaining steps. This example uses only clusters gke-west-1 and gke-east-1, and gke-west-2 is used in another example.
Multi-cluster Services
Services are how Pods are exposed to clients. Because the GKE Gateway controller uses container-native load balancing, it does not use ClusterIPs or Kubernetes load balancing to reach Pods. Traffic is sent directly from the load balancer to the Pod IPs. However, Services still play a critical role as a logical identifier for Pod grouping.

Multi-cluster Services (MCS) is both an API standard for Services that span clusters and a GKE controller that provides service discovery across GKE clusters. The multi-cluster Gateway controller uses MCS API resources to group Pods into a Service that is addressable across, or that spans, multiple clusters.
The multi-cluster Services API defines the following custom resources:
- ServiceExports map to a Kubernetes Service, exporting the endpoints of that Service to all clusters registered to the fleet. When a Service has a corresponding ServiceExport it means that the Service can be addressed by a multi-cluster Gateway.
- ServiceImports are automatically generated by the multi-cluster Service controller. ServiceExports and ServiceImports come in pairs. If a ServiceExport exists in the fleet, then a corresponding ServiceImport is created to allow the Service mapped to the ServiceExport to be accessed across clusters.
Exporting Services works in the following way. A store Service exists in gke-1, which selects a group of Pods in that cluster. A ServiceExport is created in the cluster, which allows the Pods in gke-1 to become accessible from the other clusters in the fleet. The ServiceExport maps to and exposes Services that have the same name and Namespace as the ServiceExport resource.
apiVersion: v1
kind: Service
metadata:
name: store
namespace: store
spec:
selector:
app: store
ports:
- port: 8080
targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
name: store
namespace: store
The following diagram shows what happens after a ServiceExport is deployed. If a ServiceExport and Service pair exist, then the multi-cluster Service controller deploys a corresponding ServiceImport to every GKE cluster in the fleet. The ServiceImport is the local representation of the store Service in every cluster. This enables the client Pod in gke-2 to use ClusterIP or headless Services to reach the store Pods in gke-1. When used in this manner, multi-cluster Services provide east-west load balancing between clusters. To use multi-cluster Services for cluster-to-cluster load balancing, see Configuring multi-cluster Services.
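As an illustration of that east-west pattern (this is not required for the Gateway examples on this page), an exported Service is typically reachable from Pods in other fleet clusters through the ClusterSet DNS name that the MCS controller creates, following the SERVICE.NAMESPACE.svc.clusterset.local convention documented for multi-cluster Services:

# Hypothetical check from inside any Pod in another fleet cluster (for
# example, a debugging Pod you already have): call the store Service in
# the store Namespace through its fleet-wide ClusterSet name.
curl http://store.store.svc.clusterset.local:8080

This path is separate from the Gateway data path described next, which goes through the load balancer rather than cluster-to-cluster.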
Multi-cluster Gateways also use ServiceImports, but not for cluster-to-cluster load balancing. Instead, Gateways use ServiceImports as logical identifiers for a Service that exists in another cluster or that stretches across multiple clusters. The following HTTPRoute references a ServiceImport instead of a Service resource. By referencing a ServiceImport, the HTTPRoute indicates that it forwards traffic to a group of backend Pods that run across one or more clusters.
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
name: store-route
namespace: store
labels:
gateway: multi-cluster-gateway
spec:
parentRefs:
- kind: Gateway
namespace: store
name: external-http
hostnames:
- "store.example.com"
rules:
- backendRefs:
- name: store
port: 8080
The following diagram shows how the HTTPRoute routes store.example.com traffic to store Pods on gke-1 and gke-2. The load balancer treats them as one pool of backends. If the Pods in one of the clusters become unhealthy, unreachable, or have no traffic capacity, then traffic load is balanced to the remaining Pods in the other cluster. New clusters can be added or removed with the store Service and ServiceExport, which transparently adds or removes backend Pods without any explicit routing configuration changes.
Exporting Services
At this point, the application is running across both clusters. Next, you will expose and export the applications by deploying Services and ServiceExports to each cluster.
Save the following manifest to a file named store-west-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
---
apiVersion: v1
kind: Service
metadata:
  name: store-west-1
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store-west-1
  namespace: store
Deploy this manifest to gke-west-1:

kubectl apply -f store-west-service.yaml --context gke-west-1
Save the following manifest to a file named store-east-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
---
apiVersion: v1
kind: Service
metadata:
  name: store-east-1
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store-east-1
  namespace: store
Deploy this manifest to gke-east-1:

kubectl apply -f store-east-service.yaml --context gke-east-1 --namespace store
Verify that the correct ServiceExports have been created in the cluster.
kubectl get serviceexports --context CLUSTER_NAME --namespace store
Replace CLUSTER_NAME with gke-west-1 and gke-east-1. The output should resemble the following:

# gke-west-1
NAME           AGE
store          2m40s
store-west-1   2m40s

# gke-east-1
NAME           AGE
store          2m25s
store-east-1   2m25s
This demonstrates that the store Service contains store Pods across both clusters, while the store-west-1 and store-east-1 Services only contain store Pods on their respective clusters. These overlapping Services are used to target the Pods across multiple clusters or a subset of Pods on a single cluster.

After a few minutes, verify that the accompanying ServiceImports have been automatically created by the multi-cluster Services controller across all clusters in the fleet:

kubectl get serviceimports --context CLUSTER_NAME --namespace store
Replace CLUSTER_NAME with gke-west-1 and gke-east-1. The output should resemble the following:

# gke-west-1
NAME           TYPE           IP                  AGE
store          ClusterSetIP   ["10.112.31.15"]    6m54s
store-east-1   ClusterSetIP   ["10.112.26.235"]   5m49s
store-west-1   ClusterSetIP   ["10.112.16.112"]   6m54s

# gke-east-1
NAME           TYPE           IP                 AGE
store          ClusterSetIP   ["10.72.28.226"]   5d10h
store-east-1   ClusterSetIP   ["10.72.19.177"]   5d10h
store-west-1   ClusterSetIP   ["10.72.28.68"]    4h32m
This demonstrates that all three Services are accessible from both clusters in the fleet. However, because there is only a single active config cluster per fleet, you can only deploy Gateways and HTTPRoutes that reference these ServiceImports in gke-west-1. When an HTTPRoute in the config cluster references these ServiceImports as backends, the Gateway can forward traffic to these Services no matter which cluster they are exported from.
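If you are not sure which cluster in your fleet is the config cluster, one way to check (assuming the Multi Cluster Ingress feature was enabled as described in Enabling multi-cluster Gateways) is to describe the feature and look at its config membership:

gcloud container fleet ingress describe --project=PROJECT_ID

# The configMembership field in the output identifies the config cluster,
# which should correspond to gke-west-1 for this example.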
Deploy the Gateway and HTTPRoute
Once the applications have been deployed, you can then configure a Gateway using the gke-l7-gxlb-mc GatewayClass. This Gateway creates an external HTTP(S) load balancer configured to distribute traffic across your target clusters.
Save the following Gateway manifest to a file named external-http-gateway.yaml:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: store
spec:
  gatewayClassName: gke-l7-gxlb-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute

This manifest uses the gke-l7-gxlb-mc GatewayClass.

Apply the Gateway manifest to this example's gke-west-1 config cluster:

kubectl apply -f external-http-gateway.yaml --context gke-west-1 --namespace store
Save the following HTTPRoute manifest to a file named public-store-route.yaml:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: public-store-route
  namespace: store
  labels:
    gateway: external-http
spec:
  hostnames:
  - "store.example.com"
  parentRefs:
  - name: external-http
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /west
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-west-1
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /east
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-east-1
      port: 8080
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store
      port: 8080
Once deployed, this HTTPRoute will configure the following routing behavior:
- Requests to /west are routed to store Pods in the gke-west-1 cluster.
- Requests to /east are routed to store Pods in the gke-east-1 cluster.
- Requests to any other path are routed to either cluster, according to its health, capacity, and proximity to the requesting client.
Note that if all the Pods on a given cluster are unhealthy (or do not exist), then traffic to the store Service is only sent to clusters that actually have store Pods. The existence of a ServiceExport and Service on a given cluster does not guarantee that traffic will be sent to that cluster. Pods must exist and pass the load balancer health check, or else the load balancer sends traffic to healthy store Pods in other clusters.
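Later, after the Gateway and HTTPRoute are deployed and programmed, you can see which backends the load balancer currently considers healthy by inspecting the backend services that the Gateway controller creates. The backend service names are generated by the controller, so list them first; this is a sketch of one way to check, not an exact name:

gcloud compute backend-services list

# Replace BACKEND_SERVICE_NAME with a name from the previous output.
gcloud compute backend-services get-health BACKEND_SERVICE_NAME --global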
Apply the HTTPRoute manifest to this example's gke-west-1 config cluster:

kubectl apply -f public-store-route.yaml --context gke-west-1 --namespace store
The following diagram shows the resources you've deployed across both clusters. Because gke-west-1 is the Gateway config cluster, it is the cluster in which the Gateway, HTTPRoutes, and ServiceImports are watched by the Gateway controller. Each cluster has a store ServiceImport and another ServiceImport specific to that cluster. Both point at the same Pods. This lets the HTTPRoute specify exactly where traffic should go: to the store Pods on a specific cluster or to the store Pods across all clusters.
Note that this is a logical resource model, not a depiction of the traffic flow. The traffic path goes directly from the load balancer to backend Pods and has no direct relation to whichever cluster is the config cluster.
Validating deployment
You can now issue requests to your multi-cluster Gateway and distribute traffic across both GKE clusters.
Validate that the Gateway and HTTPRoute have been deployed successfully by inspecting the Gateway status and events.
kubectl describe gateways.gateway.networking.k8s.io external-http --context gke-west-1 --namespace store
Your output should look similar to the following:
Spec:
  Gateway Class Name:  gke-l7-gxlb-mc
  Listeners:
    Port:      80
    Protocol:  HTTP
    Routes:
      Group:  networking.k8s.io
      Kind:   HTTPRoute
      Namespaces:
        From:  Same
      Selector:
        Match Labels:
          Gateway:  external-http
Status:
  Addresses:
    Type:   IPAddress
    Value:  34.120.172.213
  Conditions:
    Last Transition Time:  1970-01-01T00:00:00Z
    Message:               Waiting for controller
    Reason:                NotReconciled
    Status:                False
    Type:                  Scheduled
Events:
  Type    Reason  Age                From                     Message
  ----    ------  ----               ----                     -------
  Normal  UPDATE  29m (x2 over 29m)  global-gke-gateway-ctlr  store/external-http
  Normal  SYNC    59s (x9 over 29m)  global-gke-gateway-ctlr  SYNC on store/external-http was a success
Once the Gateway has deployed successfully, retrieve the external IP address from the external-http Gateway:

kubectl get gateways.gateway.networking.k8s.io external-http -o=jsonpath="{.status.addresses[0].value}" --context gke-west-1 --namespace store
Replace VIP in the following steps with the IP address you receive as output.

Send traffic to the root path of the domain. This load balances traffic to the store ServiceImport, which spans the gke-west-1 and gke-east-1 clusters. The load balancer sends your traffic to the region closest to you, so you might not see responses from the other region.

curl -H "host: store.example.com" http://VIP
The output confirms that the request was served by a Pod from the gke-east-1 cluster:

{
  "cluster_name": "gke-east-1",
  "zone": "us-east1-b",
  "host_header": "store.example.com",
  "node_name": "gke-gke-east-1-default-pool-7aa30992-t2lp.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-dg22z",
  "pod_name_emoji": "⏭",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-06-01T17:32:51"
}
Next, send traffic to the /west path. This routes traffic to the store-west-1 ServiceImport, which only has Pods running on the gke-west-1 cluster. A cluster-specific ServiceImport like store-west-1 enables an application owner to explicitly send traffic to a specific cluster, rather than letting the load balancer make the decision.

curl -H "host: store.example.com" http://VIP/west
The output confirms that the request was served by a Pod from the gke-west-1 cluster:

{
  "cluster_name": "gke-west-1",
  "zone": "us-west1-a",
  "host_header": "store.example.com",
  "node_name": "gke-gke-west-1-default-pool-65059399-2f41.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-d25m5",
  "pod_name_emoji": "🍾",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-06-01T17:39:15"
}
Finally, send traffic to the /east path:

curl -H "host: store.example.com" http://VIP/east
The output confirms that the request was served by a Pod from the gke-east-1 cluster:

{
  "cluster_name": "gke-east-1",
  "zone": "us-east1-b",
  "host_header": "store.example.com",
  "node_name": "gke-gke-east-1-default-pool-7aa30992-7j7z.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-hz6mw",
  "pod_name_emoji": "🧜🏾",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-06-01T17:40:48"
}
Blue-green, multi-cluster routing with Gateway
The gke-l7-rilb-* and gke-l7-global-external-managed-* GatewayClasses have many advanced traffic routing capabilities, including traffic splitting, header matching, header manipulation, traffic mirroring, and more. This example demonstrates how to use weight-based traffic splitting to explicitly control the traffic proportion across two GKE clusters.
This example goes through some realistic steps that a service owner would take in moving or expanding their application to a new GKE cluster. The goal of blue-green deployments is to reduce risk through multiple validation steps which confirm that the new cluster is operating correctly. This example walks through four stages of deployment:
- 100%-Header-based canary: Use HTTP header routing to send only test or synthetic traffic to the new cluster.
- 100%-Mirror traffic: Mirror user traffic to the canary cluster. This tests the capacity of the canary cluster by copying 100% of the user traffic to this cluster.
- 90%-10%: Canary a traffic split of 10% to slowly expose the new cluster to live traffic.
- 0%-100%: Cutover fully to the new cluster with the option of switching back if any errors are observed.
This example is similar to the previous one, except it deploys an internal multi-cluster Gateway instead. This deploys an internal HTTP(S) load balancer that is only privately accessible from within the VPC. You use the same clusters and application that you deployed in the previous steps, but expose them through a different Gateway.
Prerequisites
The following example builds on some of the steps in Deploying an external multi-cluster Gateway. Ensure that you have done the following steps before proceeding with this example:
This example uses the gke-west-1 and gke-west-2 clusters that you already set up. These clusters are in the same region because the gke-l7-rilb-mc GatewayClass is regional and only supports cluster backends in the same region.
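To confirm that both clusters are in the same region, one quick check is to list your clusters and their locations:

gcloud container clusters list --format="table(name,location)"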
Deploy the Service and ServiceExports needed on each cluster. If you deployed Services and ServiceExports from the previous example then you already deployed some of these.
kubectl apply --context gke-west-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store-west-1-service.yaml
kubectl apply --context gke-west-2 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store-west-2-service.yaml
It deploys a similar set of resources to each cluster:

service/store created
serviceexport.net.gke.io/store created
service/store-west-2 created
serviceexport.net.gke.io/store-west-2 created
Configuring a proxy-only subnet
If you have not already done so, configure a proxy-only subnet for each region in which you are deploying internal Gateways. This subnet is used to provide internal IP addresses to the load balancer proxies.
You must create a proxy-only subnet before you create Gateways that manage internal HTTP(S) load balancers. Each region of a Virtual Private Cloud (VPC) in which you use internal HTTP(S) load balancers must have a proxy-only subnet.
The gcloud compute networks subnets create command creates a proxy-only subnet:
gcloud compute networks subnets create SUBNET_NAME \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE \
--region=REGION \
--network=VPC_NETWORK_NAME \
--range=CIDR_RANGE
Replace the following:

- SUBNET_NAME: the name of the proxy-only subnet.
- REGION: the region of the proxy-only subnet.
- VPC_NETWORK_NAME: the name of the VPC network that contains the subnet.
- CIDR_RANGE: the primary IP address range of the subnet. You must use a subnet mask no larger than /26 so that at least 64 IP addresses are available for proxies in the region. The recommended subnet mask is /23.
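For example, a filled-in command for this tutorial's us-west1 region might look like the following. The subnet name, network, and CIDR range here are illustrative values, not requirements:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=default \
    --range=10.129.0.0/23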
If the following event appears on your internal Gateway, a proxy-only subnet does not exist for that region. To resolve this issue, deploy a proxy-only subnet.
generic::invalid_argument: error ensuring load balancer: Insert: Invalid value for field 'resource.target': 'regions/us-west1/targetHttpProxies/gkegw-x5vt-default-internal-http-2jzr7e3xclhj'. A reserved and active subnetwork is required in the same region and VPC as the forwarding rule.
Deploying the Gateway
The following Gateway is created from the gke-l7-rilb-mc GatewayClass. This is a regional Gateway that can only target GKE clusters in the same region.
If you have not done so already, create a proxy-only subnet for the internal load balancer. This is required for the internal load balancer to function correctly. It creates a subnet that is used to provide IP addressing to the load balancer proxies. Create the proxy-only subnet in the same region that your cluster is in.
Save the following Gateway manifest to a file named internal-http-gateway.yaml:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-http
  namespace: store
spec:
  gatewayClassName: gke-l7-rilb-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
Apply the Gateway manifest to this example's gke-west-1 config cluster:

kubectl apply -f internal-http-gateway.yaml --context gke-west-1 --namespace store
Validate that the Gateway has come up successfully. You can filter for just the events from this Gateway with the following command:
kubectl get events --field-selector involvedObject.kind=Gateway,involvedObject.name=internal-http --context=gke-west-1 --namespace store
The Gateway deployment was successful if the output resembles the following:
LAST SEEN   TYPE     REASON   OBJECT                  MESSAGE
22m         Normal   ADD      gateway/internal-http   store/internal-http
6m50s       Normal   UPDATE   gateway/internal-http   store/internal-http
11m         Normal   SYNC     gateway/internal-http   store/internal-http
3m26s       Normal   SYNC     gateway/internal-http   SYNC on store/internal-http was a success
Header-based canary
Header-based canarying lets the service owner match synthetic test traffic that does not come from real users. This is an easy way of validating that the basic networking of the application is functioning without exposing users directly.
Save the following manifest as a file named internal-route-stage-1.yaml:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-store-route
  namespace: store
  labels:
    gateway: internal-http
spec:
  parentRefs:
  - kind: Gateway
    namespace: store
    name: internal-http
  hostnames:
  - "store.example.internal"
  rules:
  # Matches for env=canary and sends it to store-west-2 ServiceImport
  - matches:
    - headers:
      - name: env
        value: canary
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-west-2
      port: 8080
  # All other traffic goes to store-west-1 ServiceImport
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-west-1
      port: 8080
Once deployed, this HTTPRoute configures the following routing behavior:
- Internal requests to store.example.internal without the env: canary HTTP header are routed to store Pods on the gke-west-1 cluster.
- Internal requests to store.example.internal with the env: canary HTTP header are routed to store Pods on the gke-west-2 cluster.
Apply internal-route-stage-1.yaml to the gke-west-1 cluster:

kubectl apply -f internal-route-stage-1.yaml --context gke-west-1 --namespace store
Validate that the HTTPRoute is functioning correctly by sending traffic to the Gateway IP address.
Retrieve the internal IP address from the internal-http Gateway:

kubectl get gateways.gateway.networking.k8s.io internal-http -o=jsonpath="{.status.addresses[0].value}" --context gke-west-1 --namespace store
Replace VIP in the following steps with the IP address you receive as output.
Send a request to the Gateway using the env: canary HTTP header. This confirms that traffic is being routed to gke-west-2. Use a private client in the same VPC as the GKE clusters to confirm that requests are being routed correctly. The following command must be run on a machine that has private access to the Gateway IP address or else it will not function:

curl -H "host: store.example.internal" -H "env: canary" http://VIP
The output confirms that the request was served by a Pod from the gke-west-2 cluster:

{
  "cluster_name": "gke-west-2",
  "host_header": "store.example.internal",
  "node_name": "gke-gke-west-2-default-pool-4cde1f72-m82p.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-9kdb5",
  "pod_name_emoji": "😂",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-05-31T01:21:55",
  "zone": "us-west1-a"
}
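You can also confirm the default rule by sending the same request without the env: canary header. Based on the routing behavior above, it should be served by a store Pod in the gke-west-1 cluster:

curl -H "host: store.example.internal" http://VIP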
Traffic mirror
This stage sends traffic to the intended cluster but also mirrors that traffic to the canary cluster.
Save the following manifest as a file named internal-route-stage-2.yaml:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-store-route
  namespace: store
  labels:
    gateway: internal-http
spec:
  parentRefs:
  - kind: Gateway
    namespace: store
    name: internal-http
  hostnames:
  - "store.example.internal"
  rules:
  # Sends all traffic to store-west-1 ServiceImport
  - backendRefs:
    - name: store-west-1
      group: net.gke.io
      kind: ServiceImport
      port: 8080
    # Also mirrors all traffic to store-west-2 ServiceImport
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          group: net.gke.io
          kind: ServiceImport
          name: store-west-2
          port: 8080
Apply internal-route-stage-2.yaml to the gke-west-1 cluster:

kubectl apply -f internal-route-stage-2.yaml --context gke-west-1 --namespace store
Using your private client, send a request to the internal-http Gateway. Use the /mirror path so you can uniquely identify this request in the application logs in a later step:

curl -H "host: store.example.internal" http://VIP/mirror
The output confirms that the client received a response from a Pod in the gke-west-1 cluster:

{
  "cluster_name": "gke-west-1",
  "host_header": "store.example.internal",
  "node_name": "gke-gke-west-1-default-pool-65059399-ssfq.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-brg5w",
  "pod_name_emoji": "🎖",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-05-31T01:24:51",
  "zone": "us-west1-a"
}
This confirms that the primary cluster is responding to traffic. You still need to confirm that the cluster you are migrating to is receiving mirrored traffic.
Check the application logs of a store Pod on the gke-west-2 cluster. The logs should confirm that the Pod received mirrored traffic from the load balancer:

kubectl logs deployment/store --context gke-west-2 -n store | grep /mirror
This output confirms that Pods on the gke-west-2 cluster are also receiving the same requests; however, their responses to these requests are not sent back to the client. The IP addresses seen in the logs are the load balancer's internal IP addresses, which are communicating with your Pods.

Found 2 pods, using pod/store-5f5b954888-gtldf
2021-05-31 01:32:21,416 ... "GET /mirror HTTP/1.1" 200 -
2021-05-31 01:32:23,323 ... "GET /mirror HTTP/1.1" 200 -
2021-05-31 01:32:24,137 ... "GET /mirror HTTP/1.1" 200 -
Mirroring is helpful for determining how traffic load will affect application performance without impacting responses to your clients in any way. It may not be necessary for all kinds of rollouts, but can be useful when rolling out large changes that could affect performance or load.
Traffic split
Traffic splitting is one of the most common methods of rolling out new code or deploying to new environments safely. The service owner sets an explicit percentage of traffic that is sent to the canary backends. This is typically a small fraction of the overall traffic, so that the success of the rollout can be determined with an acceptable amount of risk to real user requests.
Save the following manifest as a file named internal-route-stage-3.yaml:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-store-route
  namespace: store
  labels:
    gateway: internal-http
spec:
  parentRefs:
  - kind: Gateway
    namespace: store
    name: internal-http
  hostnames:
  - "store.example.internal"
  rules:
  - backendRefs:
    # 90% of traffic to store-west-1 ServiceImport
    - name: store-west-1
      group: net.gke.io
      kind: ServiceImport
      port: 8080
      weight: 90
    # 10% of traffic to store-west-2 ServiceImport
    - name: store-west-2
      group: net.gke.io
      kind: ServiceImport
      port: 8080
      weight: 10
Apply internal-route-stage-3.yaml to the gke-west-1 cluster:

kubectl apply -f internal-route-stage-3.yaml --context gke-west-1 --namespace store
Using your private client, send a continuous curl request to the internal-http Gateway:

while true; do curl -H "host: store.example.internal" -s VIP | grep "cluster_name"; sleep 1; done
The output will be similar to this, indicating that a 90/10 traffic split is occurring:

"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-2",
"cluster_name": "gke-west-1",
"cluster_name": "gke-west-1",
...
Doing a traffic split with a minority of the traffic enables the service owner to inspect the health of the application and the responses. If all the signals look healthy, then they may proceed to the full cutover.
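Before cutting over, if you want a rough empirical measure of the split rather than watching the stream, you can count responses over a fixed number of requests from your private client. This is only a convenience sketch:

for i in $(seq 1 100); do
  curl -s -H "host: store.example.internal" http://VIP | grep -o '"cluster_name": "[^"]*"'
done | sort | uniq -c

# With a 90/10 split, you should see roughly 90 responses from gke-west-1
# and roughly 10 from gke-west-2.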
Traffic cut over
The last stage of the blue-green migration is to fully cut over to the new cluster and remove the old cluster. If the service owner was instead onboarding a second cluster alongside an existing cluster, this last step would be different, because the final state would have traffic going to both clusters. In that scenario, a single store ServiceImport that has Pods from both the gke-west-1 and gke-west-2 clusters is recommended. This allows the load balancer to make the decision of where traffic should go for an active-active application, based on proximity, health, and capacity.
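For reference, a minimal sketch of that active-active variant might look like the following HTTPRoute, which simply sends all traffic to the fleet-wide store ServiceImport instead of the cluster-specific ones. It is not one of the blue-green stages in this example:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-store-route
  namespace: store
  labels:
    gateway: internal-http
spec:
  parentRefs:
  - kind: Gateway
    namespace: store
    name: internal-http
  hostnames:
  - "store.example.internal"
  rules:
  # Active-active: the store ServiceImport spans Pods in both gke-west-1
  # and gke-west-2, so the load balancer chooses the backend based on
  # proximity, health, and capacity.
  - backendRefs:
    - name: store
      group: net.gke.io
      kind: ServiceImport
      port: 8080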
Save the following manifest as a file named internal-route-stage-4.yaml:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-store-route
  namespace: store
  labels:
    gateway: internal-http
spec:
  parentRefs:
  - kind: Gateway
    namespace: store
    name: internal-http
  hostnames:
  - "store.example.internal"
  rules:
  - backendRefs:
    # No traffic to the store-west-1 ServiceImport
    - name: store-west-1
      group: net.gke.io
      kind: ServiceImport
      port: 8080
      weight: 0
    # All traffic to the store-west-2 ServiceImport
    - name: store-west-2
      group: net.gke.io
      kind: ServiceImport
      port: 8080
      weight: 100
Apply internal-route-stage-4.yaml to the gke-west-1 cluster:

kubectl apply -f internal-route-stage-4.yaml --context gke-west-1 --namespace store
Using your private client, send a continuous curl request to the internal-http Gateway:

while true; do curl -H "host: store.example.internal" -s VIP | grep "cluster_name"; sleep 1; done
The output will be similar to this, indicating that all traffic is now going to gke-west-2:

"cluster_name": "gke-west-2",
"cluster_name": "gke-west-2",
"cluster_name": "gke-west-2",
"cluster_name": "gke-west-2",
...
This final step completes a full blue-green application migration from one GKE cluster to another GKE cluster.
Deploy capacity-based load balancing
The exercise in this section demonstrates global load balancing and Service capacity concepts by deploying an application across two GKE clusters in different regions. Generated traffic is sent at various requests per second (RPS) levels to show how traffic is load balanced across clusters and regions.
The following diagram shows the topology that you will deploy and how traffic overflows between clusters and regions when traffic has exceeded Service capacity:
To learn more about traffic management, see GKE traffic management.
Prepare your environment
Follow Enabling multi-cluster Gateways to prepare your environment.
Confirm that the GatewayClass resources are installed on the config cluster:
kubectl get gatewayclasses --context=gke-west-1
The output is similar to the following:
NAME                                 CONTROLLER                  ACCEPTED   AGE
gke-l7-global-external-managed       networking.gke.io/gateway   True       16h
gke-l7-global-external-managed-mc    networking.gke.io/gateway   True       16h
gke-l7-gxlb                          networking.gke.io/gateway   True       16h
gke-l7-gxlb-mc                       networking.gke.io/gateway   True       16h
gke-l7-rilb                          networking.gke.io/gateway   True       16h
gke-l7-rilb-mc                       networking.gke.io/gateway   True       16h
Deploy an application
Deploy the sample web application server to both clusters:
kubectl apply --context gke-west-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/master/gateway/docs/store-traffic-deploy.yaml
kubectl apply --context gke-east-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/master/gateway/docs/store-traffic-deploy.yaml
The output is similar to the following:
namespace/store created
deployment.apps/store created
Deploy a Service, Gateway, and HTTPRoute
Save the following sample manifest as store-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: traffic-test
  annotations:
    networking.gke.io/max-rate-per-endpoint: "10"
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: store
  type: ClusterIP
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: traffic-test
This manifest describes a Service and ServiceExport that exports the Service across clusters as a multi-cluster Service. The ServiceExport indicates that the store Service endpoints on this cluster are aggregated with the endpoints of all the other store Services within the fleet. The ServiceExport acts as the enablement flag to convert a Service into a multi-cluster Service. The max-rate-per-endpoint annotation is set to 10. With 2 replicas, each Service has 20 RPS of capacity per cluster.

For more information on how to choose a Service capacity level for your Service, see Determine your Service's capacity.
Apply the manifest to both clusters:
kubectl apply -f store-service.yaml --context gke-west-1
kubectl apply -f store-service.yaml --context gke-east-1
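If you want to confirm the values behind the capacity calculation above, one way (entirely optional) is to read the annotation back and look at the endpoints backing the Service in one cluster:

# Read the configured per-endpoint capacity from the Service annotations.
kubectl get service store --namespace traffic-test --context gke-west-1 \
    -o jsonpath="{.metadata.annotations}"

# List the ready endpoints backing the Service in this cluster.
kubectl get endpoints store --namespace traffic-test --context gke-west-1

# Per-cluster capacity = max-rate-per-endpoint x ready endpoints
#                      = 10 RPS x 2 = 20 RPS in this example.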
Save the following sample manifest as store-gateway.yaml:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: store
  namespace: traffic-test
spec:
  gatewayClassName: gke-l7-gxlb-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
The manifest describes an external, global, multi-cluster Gateway that deploys an external HTTP(S) load balancer with a publicly accessible IP address.
Apply the manifest to your config cluster:
kubectl apply -f store-gateway.yaml --context gke-west-1
Save the following sample manifest as store-route.yaml:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: store
  namespace: traffic-test
  labels:
    gateway: store
spec:
  parentRefs:
  - kind: Gateway
    namespace: traffic-test
    name: store
  rules:
  - backendRefs:
    - name: store
      group: net.gke.io
      kind: ServiceImport
      port: 8080
The manifest describes an HTTPRoute that configures the Gateway with a routing rule that directs all traffic to the store ServiceImport. The store ServiceImport groups the store Service Pods across both clusters and allows them to be addressed by the load balancer as a single Service.
Apply the manifest to your config cluster:
kubectl apply -f store-route.yaml --context gke-west-1
It might take several minutes for the Gateway to fully deploy. You can check the Gateway's events after a few minutes to see if it has finished deploying:
kubectl describe gateways.gateway.networking.k8s.io store -n traffic-test
The output is similar to the following:
...
Status:
  Addresses:
    Type:   IPAddress
    Value:  34.117.182.69
  Conditions:
    Last Transition Time:  1970-01-01T00:00:00Z
    Message:               Waiting for controller
    Reason:                NotReconciled
    Status:                False
    Type:                  Scheduled
Events:
  Type     Reason  Age                   From                   Message
  ----     ------  ----                  ----                   -------
  Normal   ADD     8m13s                 mc-gateway-controller  traffic-test/store
  Normal   SYNC    7m57s                 mc-gateway-controller  traffic-test/store
  Warning  SYNC    6m14s                 mc-gateway-controller  failed to translate Gateway "traffic-test/store": no GroupKey's for port 8080 exist
  Normal   UPDATE  2m6s (x3 over 8m13s)  mc-gateway-controller  traffic-test/store
  Normal   SYNC    2m6s                  mc-gateway-controller  SYNC on traffic-test/store was a success
This output shows that the Gateway has deployed successfully. It might still take a few minutes for traffic to start passing after the Gateway has deployed. Take note of the IP address in this output, as it is used in a following step.
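If you prefer to capture the address programmatically instead of copying it from the describe output, you can query the Gateway status in the same way as the earlier examples:

kubectl get gateways.gateway.networking.k8s.io store -n traffic-test \
    -o=jsonpath="{.status.addresses[0].value}" --context gke-west-1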
Confirm traffic
Confirm that traffic is passing to the application by testing the Gateway IP address with a curl command:
curl GATEWAY_IP_ADDRESS
The output is similar to the following:
{
"cluster_name": "gke-west-1",
"host_header": "34.117.182.69",
"pod_name": "store-54785664b5-mxstv",
"pod_name_emoji": "👳🏿",
"project_id": "project",
"timestamp": "2021-11-01T14:06:38",
"zone": "us-west1-a"
}
This output shows the Pod metadata, which indicates the region where the request was served from.
Verify traffic using load testing
To verify the load balancer is working, you can deploy a traffic generator in
your gke-west-1
cluster. The traffic generator generates traffic at different
levels of load to demonstrate the capacity and overflow capabilities of the load
balancer. The following steps demonstrate three levels of load:
- 10 RPS, which is under the capacity for the store Service in gke-west-1.
- 30 RPS, which is over capacity for the gke-west-1 store Service and causes traffic overflow to gke-east-1.
- 60 RPS, which is over capacity for the Services in both clusters.
Configure dashboard
Get the name of the underlying URL map for your Gateway:
kubectl get gateways.gateway.networking.k8s.io store -n traffic-test --context=gke-west-1 -o=jsonpath="{.metadata.annotations.networking\.gke\.io/url-maps}"
The output is similar to the following:
gkemcg-traffic-test-store-armvfyupay1t
In the Google Cloud console, go to the Metrics explorer page.
Click on MQL.
Enter the following query to observe traffic metrics for the store Service across your two clusters:
fetch https_lb_rule
| metric 'loadbalancing.googleapis.com/https/backend_request_count'
| filter (resource.url_map_name == 'GATEWAY_URL_MAP')
| align rate(1m)
| every 1m
| group_by [resource.backend_scope],
    [value_backend_request_count_aggregate: aggregate(value.backend_request_count)]
Replace GATEWAY_URL_MAP with the URL map name from the previous step.

Click Run query. Wait at least 5 minutes after deploying the load generator in the next section for the metrics to display in the chart.
Test with 10 RPS
Deploy a Pod to your gke-west-1 cluster:

kubectl run --context=gke-west-1 -i --tty --rm loadgen \
    --image=cyrilbkr/httperf \
    --restart=Never \
    -- /bin/sh -c 'httperf \
    --server=GATEWAY_IP_ADDRESS \
    --hog --uri="/zone" --port 80 --wsess=100000,1,1 --rate 10'
Replace GATEWAY_IP_ADDRESS with the Gateway IP address from the previous step.

The output is similar to the following, indicating that the traffic generator is sending traffic:
If you don't see a command prompt, try pressing enter.
The load generator continuously sends 10 RPS to the Gateway. Even though traffic is coming from inside a Google Cloud region, the load balancer treats it as client traffic coming from the US West Coast. To simulate realistic client diversity, the load generator sends each HTTP request as a new TCP connection, which means traffic is distributed across backend Pods more evenly.
The generator takes up to 5 minutes to generate traffic for the dashboard.
View your Metrics explorer dashboard. Two lines appear, indicating how much traffic is load balanced to each of the clusters:
You should see that us-west1-a is receiving approximately 10 RPS of traffic while us-east1-b is not receiving any traffic. Because the traffic generator is running in us-west1, all traffic is sent to the Service in the gke-west-1 cluster.

Stop the load generator using Ctrl+C, then delete the Pod:
kubectl delete pod loadgen --context=gke-west-1
Test with 30 RPS
Deploy the load generator again, but configured to send 30 RPS:
kubectl run --context=gke-west-1 -i --tty --rm loadgen \
    --image=cyrilbkr/httperf \
    --restart=Never \
    -- /bin/sh -c 'httperf \
    --server=GATEWAY_IP_ADDRESS \
    --hog --uri="/zone" --port 80 --wsess=100000,1,1 --rate 30'
The generator takes up to 5 minutes to generate traffic for the dashboard.
View your Cloud Ops dashboard.
You should see that approximately 20 RPS is being sent to us-west1-a and 10 RPS to us-east1-b. This indicates that the Service in gke-west-1 is fully utilized and is overflowing 10 RPS of traffic to the Service in gke-east-1.

Stop the load generator using Ctrl+C, then delete the Pod:
kubectl delete pod loadgen --context=gke-west-1
Test with 60 RPS
Deploy the load generator configured to send 60 RPS:
kubectl run --context=gke-west-1 -i --tty --rm loadgen \
    --image=cyrilbkr/httperf \
    --restart=Never \
    -- /bin/sh -c 'httperf \
    --server=GATEWAY_IP_ADDRESS \
    --hog --uri="/zone" --port 80 --wsess=100000,1,1 --rate 60'
Wait 5 minutes and view your Cloud Ops dashboard. It should now show that both clusters are receiving roughly 30 RPS. Since all Services are overutilized globally, there is no traffic spillover and Services absorb all the traffic they can.
Stop the load generator using Ctrl+C, then delete the Pod:
kubectl delete pod loadgen --context=gke-west-1
Clean up
After completing the exercise, follow these steps to remove resources and avoid incurring unwanted charges on your account:
Unregister the clusters from the fleet if they don't need to be registered for another purpose.
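For example, one way to unregister a cluster is with the fleet memberships command; the membership name and flags depend on how the cluster was registered, so treat this as a sketch:

gcloud container fleet memberships unregister MEMBERSHIP_NAME \
    --gke-cluster=CLUSTER_LOCATION/CLUSTER_NAME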
Disable the multiclusterservicediscovery feature:

gcloud container fleet multi-cluster-services disable
Disable Multi Cluster Ingress:
gcloud container fleet ingress disable
Disable the APIs:
gcloud services disable \
    multiclusterservicediscovery.googleapis.com \
    multiclusteringress.googleapis.com \
    trafficdirector.googleapis.com \
    --project=PROJECT_ID
What's next
- Learn more about the Gateway controller.