Set up a multi-cloud or hybrid mesh
This page explains how to set up a multi-cloud or hybrid mesh for the following platforms:
- Hybrid: GKE on Google Cloud and Google Distributed Cloud (preview)
- Multi-cloud: GKE on Google Cloud and Amazon EKS (preview)
By following these instructions you set up two clusters, but you can extend this process to incorporate any number of clusters into your mesh.
Prerequisites
- All clusters must be registered to the same fleet host project.
- All GKE clusters must be in a shared VPC configuration on the same network.
- The cluster's Kubernetes control plane address and the gateway address must be reachable from every cluster in the mesh. The Google Cloud project that contains the GKE clusters must be allowed to create external load balancer types. We recommend that you use authorized networks and VPC firewall rules to restrict access.
- Private clusters, including GKE private clusters, are not supported. If you use on-premises clusters, including Google Distributed Cloud, the Kubernetes control plane address and the gateway address must be reachable from Pods in the GKE clusters. We recommend that you use Cloud VPN to connect the GKE cluster's subnet with the on-premises cluster's network.
- If you use Istio CA, use the same custom root certificate for all clusters, as sketched after this list.
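The following is a minimal sketch of sharing one root certificate across clusters using the sample certificate tooling shipped in the Istio release directory (tools/certs). The paths and the cluster names "cluster1" and "cluster2" are illustrative assumptions, and a production mesh should use a managed CA rather than self-signed certificates:
# Sketch only: create one self-signed root CA, then a per-cluster
# intermediate CA signed by it. Paths and cluster names are
# placeholder assumptions.
mkdir -p certs && cd certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts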
Before you begin
This guide assumes that you installed Cloud Service Mesh using the asmcli tool. You need asmcli and the configuration package that asmcli downloads to the directory that you specified in --output_dir when you ran asmcli install. For more information, see Install dependent tools and validate cluster to:
- Install required tools
- Download asmcli
- Grant cluster admin permissions
- Validate your project and cluster
You need access to the kubeconfig files for all the clusters that you are setting up in the mesh. For a GKE cluster, you can create a new kubeconfig file for the cluster by setting the KUBECONFIG environment variable to the complete path of the file in your terminal and then generating the kubeconfig entry.
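For example, a sketch of generating a dedicated kubeconfig entry for a GKE cluster, where CLUSTER_NAME, ZONE, and the file path are placeholders for your own values:
# Point KUBECONFIG at a new file, then let gcloud write the
# cluster's credentials and context into that file.
export KUBECONFIG=~/.kube/gke-cluster-1.config
gcloud container clusters get-credentials CLUSTER_NAME \
    --zone ZONE --project PROJECT_ID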
Set up environment variables and placeholders
You need the following environment variables when you install the east-west gateway.
Create environment variables for the network names:
GKE clusters default to the cluster network name:
export NETWORK_1="PROJECT_ID-CLUSTER_NETWORK"
Other clusters use default:
export NETWORK_2="default"
If you installed Cloud Service Mesh on other clusters with a different value for --network_id, then pass that same value to NETWORK_2.
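If you're not sure which network value a cluster was installed with, one way to check is the topology.istio.io/network label that Istio-based installations conventionally set on the istio-system namespace. Relying on this label is an assumption about your installation:
# Sketch: read the network name recorded on the istio-system namespace.
# The topology.istio.io/network label is an Istio convention and may not
# be present on every installation.
kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 get namespace istio-system \
    -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'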
Install the east-west gateway
Install a gateway in CLUSTER_1 (your GKE cluster) that is dedicated to east-west traffic to CLUSTER_2 (your multi-cloud or on-premises cluster):
asm/istio/expansion/gen-eastwest-gateway.sh \
    --network ${NETWORK_1} \
    --revision asm-1233-2 | \
    ./istioctl --kubeconfig=PATH_TO_KUBECONFIG_1 install -y -f -
Note that this gateway is public on the Internet by default. Production systems might require additional access restrictions, for example firewall rules, to prevent external attacks.
Install a gateway in CLUSTER_2 that is dedicated to east-west traffic for CLUSTER_1.
asm/istio/expansion/gen-eastwest-gateway.sh \
    --network ${NETWORK_2} \
    --revision asm-1233-2 | \
    ./istioctl --kubeconfig=PATH_TO_KUBECONFIG_2 install -y -f -
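Before continuing, you can check that each gateway received an external address. The service name istio-eastwestgateway is the name the generation script conventionally creates; adjust it if your installation differs:
# Sketch: the EXTERNAL-IP column should show an address, not <pending>.
kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 get svc istio-eastwestgateway -n istio-system
kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 get svc istio-eastwestgateway -n istio-system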
Expose services
Since the clusters are on separate networks, you need to expose all services (*.local) on the east-west gateway in both clusters. While this gateway is public on the internet, services behind it can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network.
Expose services through the east-west gateway for every cluster:
kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 apply -n istio-system -f \
asm/istio/expansion/expose-services.yaml
kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 apply -n istio-system -f \
asm/istio/expansion/expose-services.yaml
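To confirm that the Gateway resource was applied, you can list Istio Gateway resources in each cluster. In the upstream Istio version of expose-services.yaml the resource is named cross-network-gateway, though that exact name is an assumption here:
# Sketch: the applied Gateway should appear in the istio-system namespace.
kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 get gateways.networking.istio.io -n istio-system
kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 get gateways.networking.istio.io -n istio-system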
Enable endpoint discovery
Run the asmcli create-mesh command to enable endpoint discovery. This example only shows two clusters, but you can run the command to enable endpoint discovery on additional clusters, subject to the GKE Hub service limit.
./asmcli create-mesh \
FLEET_PROJECT_ID \
PATH_TO_KUBECONFIG_1 \
PATH_TO_KUBECONFIG_2
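Endpoint discovery works by installing a remote-cluster secret for each peer in every cluster's istio-system namespace. As a sanity check, you can look for those secrets by the istio/multiCluster=true label, which is the convention istioctl uses when creating remote secrets; if your installation names things differently, adjust accordingly:
# Sketch: expect one secret per remote cluster in each istio-system namespace.
kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 get secrets -n istio-system -l istio/multiCluster=true
kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 get secrets -n istio-system -l istio/multiCluster=true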
Verify multicluster connectivity
This section explains how to deploy the sample HelloWorld and Sleep services to your multi-cluster environment to verify that cross-cluster load balancing works.
Enable sidecar injection
Use the following command to locate the revision label, which you will use in later steps.
kubectl -n istio-system get pods -l app=istiod --show-labels
The output looks similar to the following:
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
istiod-asm-173-3-5788d57586-bljj4   1/1     Running   0          23h   app=istiod,istio.io/rev=asm-173-3,istio=istiod,pod-template-hash=5788d57586
istiod-asm-173-3-5788d57586-vsklm   1/1     Running   1          23h   app=istiod,istio.io/rev=asm-173-3,istio=istiod,pod-template-hash=5788d57586
In the output, under the LABELS column, note the value of the istiod revision label, which follows the prefix istio.io/rev=. In this example, the value is asm-173-3. Use the revision value in the steps in the next section.
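The remaining steps use ${CTX_1} and ${CTX_2} for the kubectl context names of the two clusters, and ${SAMPLES_DIR} for the directory that you passed to --output_dir when you ran asmcli. If you haven't set these yet, a minimal sketch, assuming the two kubeconfig files from earlier:
# Make both clusters visible to kubectl by listing both kubeconfig files,
# then record each cluster's context name (pick them from the printed list).
export KUBECONFIG=PATH_TO_KUBECONFIG_1:PATH_TO_KUBECONFIG_2
kubectl config get-contexts -o name
export CTX_1=CONTEXT_NAME_1
export CTX_2=CONTEXT_NAME_2
# SAMPLES_DIR is the asmcli --output_dir from "Before you begin".
export SAMPLES_DIR=OUTPUT_DIR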
Install the HelloWorld service
Create the sample namespace and the Service Definition in each cluster. In the following command, substitute REVISION with the istiod revision label that you noted in the previous step.
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl create --context=${CTX} namespace sample
    kubectl label --context=${CTX} namespace sample \
        istio-injection- istio.io/rev=REVISION --overwrite
done
The output is:
label "istio-injection" not found.
namespace/sample labeled
You can safely ignore label "istio-injection" not found.
Create the HelloWorld service in both clusters:
kubectl create --context=${CTX_1} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
kubectl create --context=${CTX_2} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
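As an optional check, confirm that the helloworld Service now exists in the sample namespace of both clusters:
# Each command should print the helloworld Service on port 5000.
kubectl get service --context=${CTX_1} -n sample helloworld
kubectl get service --context=${CTX_2} -n sample helloworld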
Deploy HelloWorld v1 and v2 to each cluster
Deploy HelloWorld v1 to CLUSTER_1 and v2 to CLUSTER_2, which you use later to verify cross-cluster load balancing:
kubectl create --context=${CTX_1} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample
kubectl create --context=${CTX_2} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l version=v2 -n sample
Confirm HelloWorld v1 and v2 are running using the following commands. Verify that the output is similar to that shown:
kubectl get pod --context=${CTX_1} -n sample
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-86f77cd7bd-cpxhv   2/2     Running   0          40s
kubectl get pod --context=${CTX_2} -n sample
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-758dd55874-6x4t8   2/2     Running   0          40s
Deploy the Sleep service
Deploy the Sleep service to both clusters. You use this Pod as a client from which to send test requests for demonstration purposes:
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl apply --context=${CTX} \
        -f ${SAMPLES_DIR}/samples/sleep/sleep.yaml -n sample
done
Wait for the Sleep service to start in each cluster. Verify that the output is similar to that shown:
kubectl get pod --context=${CTX_1} -n sample -l app=sleep
NAME                     READY   STATUS    RESTARTS   AGE
sleep-754684654f-n6bzf   2/2     Running   0          5s
kubectl get pod --context=${CTX_2} -n sample -l app=sleep
NAME                     READY   STATUS    RESTARTS   AGE
sleep-754684654f-dzl9j   2/2     Running   0          5s
Verify cross-cluster load balancing
Call the HelloWorld service several times and check the output to verify alternating replies from v1 and v2:
Call the HelloWorld service:
kubectl exec --context="${CTX_1}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'
The output is similar to that shown:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
Call the HelloWorld service again, this time from the other cluster:
kubectl exec --context="${CTX_2}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_2}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'
The output is similar to that shown:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
Congratulations, you've verified your load-balanced, multi-cluster Cloud Service Mesh!
Clean up
When you finish verifying load balancing, remove the HelloWorld and Sleep services from your clusters.
kubectl delete ns sample --context=${CTX_1}
kubectl delete ns sample --context=${CTX_2}