Set up a hybrid mesh
====================

*Last updated: 2025-09-04 (UTC).*

|
| **Preview --- Hybrid and mesh**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General
| Service Terms section of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA features are available "as is" and might have limited support.
|
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

| **Note:** This guide only supports Cloud Service Mesh with Istio APIs and does not support Google Cloud APIs. For more information, see the [Cloud Service Mesh overview](/service-mesh/v1.24/docs/overview).

This page explains how to set up a hybrid mesh for the following platforms:

- Hybrid: GKE on Google Cloud and Google Distributed Cloud (software only) for VMware
- Hybrid: GKE on Google Cloud and Google Distributed Cloud (software only) for bare metal

By following these instructions you set up two clusters, but you can extend
this process to incorporate any number of clusters into your mesh.

Prerequisites
-------------

- All clusters must be registered to the same [fleet host project](/anthos/multicluster-management/fleets#fleet-host-project).
- All GKE clusters must be in a [shared VPC](/vpc/docs/shared-vpc) configuration on the same network.
- The cluster's Kubernetes control plane address and the gateway address must be reachable from every cluster in the mesh.
  The Google Cloud project in which the GKE clusters are located must be allowed to create [external load balancing types](/load-balancing/docs/org-policy-constraints). We recommend that you use [authorized networks](/kubernetes-engine/docs/how-to/authorized-networks) and [VPC firewall rules](/vpc/docs/using-firewalls) to restrict access.
- Private clusters, including GKE private clusters, are not supported. If you use on-premises clusters, including Google Distributed Cloud (software only) for VMware and Google Distributed Cloud (software only) for bare metal, the Kubernetes control plane address and the gateway address must be reachable from Pods in the GKE clusters. We recommend that you use [Cloud VPN](/network-connectivity/docs/vpn/concepts/overview) to connect the GKE cluster's subnet to the on-premises cluster's network.
- If you use Istio CA, use the same custom root certificate for all clusters.

Before you begin
----------------

You need access to the kubeconfig files for all the clusters that you are
setting up in the mesh. For the GKE cluster, you can create a new
kubeconfig file by setting the `KUBECONFIG` environment variable to the full
path of the file in your terminal and then generating the kubeconfig entry.

| **Warning:** Only use kubeconfig files from trusted sources. Using a
| specially crafted kubeconfig file could result in malicious code execution
| or file exposure. If you must use an untrusted kubeconfig file, inspect it
| carefully first, much as you would a shell script.

### Set up environment variables and placeholders

You need the following environment variables when you install the
east-west gateway.

1. Create an environment variable for the project number.
   In the following command, replace FLEET_PROJECT_ID with the project ID of
   the [fleet host project](/anthos/multicluster-management/fleets#fleet-host-project).

       export PROJECT_NUMBER=$(gcloud projects describe FLEET_PROJECT_ID --format="value(projectNumber)")

2. Create an environment variable for the mesh identifier.

       export MESH_ID="proj-${PROJECT_NUMBER}"

3. Create environment variables for the network names.

   - GKE clusters default to the cluster network name:

         export NETWORK_1="PROJECT_ID-CLUSTER_NETWORK"

   - Other clusters use `default`:

         export NETWORK_2="default"

   Note that if you installed Cloud Service Mesh on other clusters with a
   different value for `--network_id`, then you should pass that same value
   to NETWORK_2.

Install the east-west gateway
-----------------------------

1. Install a gateway in CLUSTER_1 (your GKE cluster) that is
   dedicated to [east-west](https://en.wikipedia.org/wiki/East-west_traffic)
   traffic to CLUSTER_2 (your on-premises cluster):

       asm/istio/expansion/gen-eastwest-gateway.sh \
           --mesh ${MESH_ID} \
           --network ${NETWORK_1} \
           --revision asm-1246-9 | \
           ./istioctl --kubeconfig=PATH_TO_KUBECONFIG_1 install -y -f -

   Note that this gateway is public on the internet by default. Production
   systems might require additional access restrictions, for example firewall
   rules, to prevent external attacks.

2. Install a gateway in CLUSTER_2 that is dedicated to east-west traffic for
   CLUSTER_1:

       asm/istio/expansion/gen-eastwest-gateway.sh \
           --mesh ${MESH_ID} \
           --network ${NETWORK_2} \
           --revision asm-1246-9 | \
           ./istioctl --kubeconfig=PATH_TO_KUBECONFIG_2 install -y -f -

Expose services
---------------

Because the clusters are on separate networks, you need to expose all services
(`*.local`) on the east-west gateway in both clusters. While this gateway is
public on the internet, services behind it can only be accessed by services
with a trusted mTLS certificate and workload ID, just as if they were on the
same network.

Expose services through the east-west gateway for every cluster:

    kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 apply -n istio-system -f \
        asm/istio/expansion/expose-services.yaml
    kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 apply -n istio-system -f \
        asm/istio/expansion/expose-services.yaml

Enable endpoint discovery
-------------------------

| **Note:** For more information on endpoint discovery, refer to
| [Endpoint discovery with multiple control planes](https://istio.io/v1.24/docs/ops/deployment/deployment-models/#endpoint-discovery-with-multiple-control-planes).

Run the `asmcli create-mesh` command to enable endpoint discovery. This
example only shows two clusters, but you can run the command to enable
endpoint discovery on additional clusters, subject to the
[GKE Hub service limit](/anthos/fleet-management/docs/quotas).
    ./asmcli create-mesh \
        FLEET_PROJECT_ID \
        PATH_TO_KUBECONFIG_1 \
        PATH_TO_KUBECONFIG_2

Verify multi-cluster connectivity
---------------------------------

See [Injecting sidecar proxies](/service-mesh/v1.24/docs/onboarding/kubernetes-workloads#inject_sidecar_proxies).
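Before moving on, you can spot-check the steps above from the command line. The following sketch assumes the default gateway service name `istio-eastwestgateway` in the `istio-system` namespace (the names used by `gen-eastwest-gateway.sh`) and the `istio/multiCluster=true` label that Istio applies to remote secrets; verify these against your installation before relying on them.

```shell
# Each east-west gateway should report an external IP in EXTERNAL-IP
# (kubeconfig paths are placeholders; substitute your own).
kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 \
    get svc istio-eastwestgateway -n istio-system
kubectl --kubeconfig=PATH_TO_KUBECONFIG_2 \
    get svc istio-eastwestgateway -n istio-system

# Endpoint discovery installs one remote secret per peer cluster;
# expect one entry for CLUSTER_2 in CLUSTER_1, and vice versa.
kubectl --kubeconfig=PATH_TO_KUBECONFIG_1 \
    get secrets -n istio-system -l istio/multiCluster=true
```

If the gateway's `EXTERNAL-IP` stays `<pending>`, check the organization policy constraint on external load balancer creation mentioned in the prerequisites.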
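The install steps note that the east-west gateway is public on the internet by default. As one hedged example of the firewall hardening they suggest, a VPC firewall rule can limit ingress on the gateway's mTLS port (15443 in Istio's standard east-west gateway configuration) to known peer ranges. The rule name, network name, and source CIDR below are placeholders, not values from this guide.

```shell
# Allow east-west mTLS traffic only from the on-premises cluster's
# egress range (hypothetical CIDR). Adjust --network and add target
# tags or service accounts to match how your gateway nodes are labeled.
gcloud compute firewall-rules create allow-eastwest-mtls \
    --network=CLUSTER_NETWORK \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:15443 \
    --source-ranges=203.0.113.0/24
```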