Set up an Envoy sidecar service mesh
This configuration is supported for Preview customers, but we don't recommend it for new Cloud Service Mesh users. For more information, see the Cloud Service Mesh overview.
This guide demonstrates how to configure a simple service mesh in your fleet. The guide includes the following steps:

- Deploying the Envoy sidecar injector to the cluster. The injector injects the Envoy proxy container into application Pods.
- Deploying Gateway API resources that configure the Envoy sidecars in the service mesh to route requests to an example service in the `store` namespace.
- Deploying a simple client to verify the deployment.
The following diagram shows the configured service mesh.
You can configure only one `Mesh` in a cluster, because the mesh name in the sidecar injector configuration and the name of the `Mesh` resource must be the same.
Deploy the Envoy sidecar injector
To deploy the sidecar injector, you must provide the following values:

- `TRAFFICDIRECTOR_GCP_PROJECT_NUMBER`. Replace PROJECT_NUMBER with the project number of the project that your config cluster belongs to. The project number is the numeric identifier of your project; a lookup command follows this list.
- `TRAFFICDIRECTOR_MESH_NAME`. Specify the value as follows, where MESH_NAME is the value of the `metadata.name` field in the `Mesh` resource spec:

  ```
  gketd-MESH_NAME
  ```

  For example, if the value of `metadata.name` in the `Mesh` resource is `butterfly-mesh`, set the value of `TRAFFICDIRECTOR_MESH_NAME` as follows:

  ```
  TRAFFICDIRECTOR_MESH_NAME: "gketd-butterfly-mesh"
  ```

- `TRAFFICDIRECTOR_NETWORK_NAME`. Make sure that the value of `TRAFFICDIRECTOR_NETWORK_NAME` is set to empty:

  ```
  TRAFFICDIRECTOR_NETWORK_NAME=""
  ```
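If you don't have the project number handy, you can look it up from the project ID with gcloud. This is a convenience step, not part of the injector package; replace PROJECT_ID with your own project ID:

```sh
# Print the numeric project number for the given project ID.
gcloud projects describe PROJECT_ID --format="value(projectNumber)"
```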
Download the sidecar injector package:

```sh
wget https://storage.googleapis.com/traffic-director/td-sidecar-injector-xdsv3.tgz
tar -xzvf td-sidecar-injector-xdsv3.tgz
cd td-sidecar-injector-xdsv3
```
In the `specs/01-configmap.yaml` file, populate the fields `TRAFFICDIRECTOR_GCP_PROJECT_NUMBER` and `TRAFFICDIRECTOR_MESH_NAME`, and set `TRAFFICDIRECTOR_NETWORK_NAME` to empty:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    defaultConfig:
      discoveryAddress: trafficdirector.googleapis.com:443

      # Envoy proxy port to listen on for the admin interface.
      # This port is bound to 127.0.0.1.
      proxyAdminPort: 15000

      proxyMetadata:
        # Google Cloud Project number that your Fleet belongs to.
        # This is the numeric identifier of your project.
        TRAFFICDIRECTOR_GCP_PROJECT_NUMBER: "PROJECT_NUMBER"

        # TRAFFICDIRECTOR_NETWORK_NAME must be empty when
        # TRAFFICDIRECTOR_MESH_NAME is set.
        TRAFFICDIRECTOR_NETWORK_NAME: "NETWORK_NAME"

        # The value of `metadata.name` in the `Mesh` resource. When a
        # sidecar requests configurations from Cloud Service Mesh,
        # Cloud Service Mesh will only return configurations for the
        # specified mesh.
        TRAFFICDIRECTOR_MESH_NAME: "gketd-td-mesh"
```
After you complete the preceding steps, follow the installation steps to deploy the sidecar injector to the cluster.
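The exact installation steps depend on the package version. As a minimal sketch, assuming all of the package's manifests live under its `specs/` directory (as the ConfigMap path above suggests), deployment amounts to applying them:

```sh
# Apply the injector manifests: namespaces, the ConfigMap edited above,
# and the injector deployment itself.
kubectl apply -f specs/
```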
Deploy the store service

In this section, you deploy the `store` service in the mesh.
In a `store.yaml` file, save the following manifest:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: store
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store
  namespace: store
spec:
  replicas: 2
  selector:
    matchLabels:
      app: store
      version: v1
  template:
    metadata:
      labels:
        app: store
        version: v1
    spec:
      containers:
      - name: whereami
        image: gcr.io/google-samples/whereami:v1.2.20
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
```
Apply the manifest to `gke-1`:

```sh
kubectl apply -f store.yaml
```
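Before continuing, you can confirm that both `store` replicas started. This quick sanity check is not part of the original steps:

```sh
# Both store replicas should reach the Running state.
kubectl get pods -n store
```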
Create the service mesh
In a `mesh.yaml` file, save the following `mesh` manifest. The name of the `mesh` resource must match the mesh name specified in the injector ConfigMap; in this example configuration, the name `td-mesh` is used in both places:

```yaml
apiVersion: net.gke.io/v1alpha1
kind: TDMesh
metadata:
  name: td-mesh
  namespace: default
spec:
  gatewayClassName: gke-td
  allowedRoutes:
    namespaces:
      from: All
```
Apply the `mesh` manifest to `gke-1`, which creates a logical mesh named `td-mesh`:

```sh
kubectl apply -f mesh.yaml
```
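Optionally, confirm that the resource was created before moving on:

```sh
# The TDMesh resource should appear in the default namespace.
kubectl get tdmesh -n default
```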
In a `store-route.yaml` file, save the following `HTTPRoute` manifest. The manifest defines an `HTTPRoute` resource that routes HTTP traffic that specifies the hostname `example.com` to the Kubernetes service `store` in the `store` namespace:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store
spec:
  parentRefs:
  - name: td-mesh
    namespace: default
    group: net.gke.io
    kind: TDMesh
  hostnames:
  - "example.com"
  rules:
  - backendRefs:
    - name: store
      namespace: store
      port: 8080
```
Apply the route manifest to `gke-1`:

```sh
kubectl apply -f store-route.yaml
```
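Optionally, confirm that the route was created:

```sh
# The HTTPRoute should appear in the store namespace.
kubectl get httproute -n store
```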
Verify the deployment
Check the `Mesh` status and events to verify that the `Mesh` and `HTTPRoute` resources are deployed successfully:

```sh
kubectl describe tdmesh td-mesh
```
The output is similar to the following:

```
...
Status:
  Conditions:
    Last Transition Time:  2022-04-14T22:08:39Z
    Message:
    Reason:                MeshReady
    Status:                True
    Type:                  Ready
    Last Transition Time:  2022-04-14T22:08:28Z
    Message:
    Reason:                Scheduled
    Status:                True
    Type:                  Scheduled
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  ADD     36s   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  UPDATE  35s   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  SYNC    24s   mc-mesh-controller  SYNC on default/td-mesh was a success
```
To make sure that sidecar injection is enabled in the default namespace, run the following command:

```sh
kubectl get namespace default --show-labels
```

If sidecar injection is enabled, the following appears in the output:

```
istio-injection=enabled
```
If sidecar injection is not enabled, see Enabling sidecar injection.
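The linked page covers the details. In many setups, enabling injection is a matter of adding the label that the check above looks for; this is a sketch assuming the injector honors the standard `istio-injection` label, as the previous output suggests:

```sh
# Label the namespace so that the injector mutates new Pods created in it.
kubectl label namespace default istio-injection=enabled

# Injection applies only to Pods created after the label is set, so
# restart existing workloads to get the sidecar injected.
kubectl rollout restart deployment -n default
```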
To verify the deployment, deploy a client Pod that sends requests to the `store` service defined previously. In a `client.yaml` file, save the following:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: client
  name: client
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: client
  template:
    metadata:
      labels:
        run: client
    spec:
      containers:
      - name: client
        image: curlimages/curl
        command:
        - sh
        - -c
        - while true; do sleep 1; done
```
Deploy the specification:

```sh
kubectl apply -f client.yaml
```
The sidecar injector running in the cluster automatically injects an Envoy container into the client Pod.
To verify that the Envoy container is injected, run the following command:

```sh
kubectl describe pods -l run=client
```
The output is similar to the following:

```
...
Init Containers:
  # Istio-init sets up traffic interception for the Pod.
  istio-init:
    ...
  # td-bootstrap-writer generates the Envoy bootstrap file for the Envoy container.
  td-bootstrap-writer:
    ...
Containers:
  # client is the client container that runs application code.
  client:
    ...
  # Envoy is the container that runs the injected Envoy proxy.
  envoy:
    ...
```
After the client Pod is provisioned, send a request from the client Pod to the `store` service.
Get the name of the client Pod:

```sh
CLIENT_POD=$(kubectl get pod -l run=client -o=jsonpath='{.items[0].metadata.name}')

# The VIP where the following request is sent. Because all requests
# from the client container are redirected to the Envoy proxy sidecar, you
# can use any IP address, including 10.0.0.2, 192.168.0.1, and others.
VIP='10.0.0.1'
```
Send a request to the `store` service and print the response headers:

```sh
TEST_CMD="curl -v -H 'host: example.com' $VIP"
```
Execute the test command in the client container:

```sh
kubectl exec -it $CLIENT_POD -c client -- /bin/sh -c "$TEST_CMD"
```
The output is similar to the following. Note that curl's verbose markers (`*` for connection messages, `>` for request headers, `<` for response headers) are restored here; the extracted page had flattened them all to `<`:

```
*   Trying 10.0.0.1:80...
* Connected to 10.0.0.1 (10.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.82.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< content-length: 318
< access-control-allow-origin: *
< server: envoy
< date: Tue, 12 Apr 2022 22:30:13 GMT
<
{
  "cluster_name": "gke-1",
  "zone": "us-west1-a",
  "host_header": "example.com",
  ...
}
```
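If the request doesn't behave as expected, the Envoy admin interface can help with debugging. Because containers in a Pod share a network namespace, you can query the admin port from the client container. This sketch assumes the `proxyAdminPort` value (15000) from the ConfigMap above:

```sh
# Dump the clusters Envoy knows about; the store backends should be listed.
kubectl exec -it $CLIENT_POD -c client -- sh -c 'curl -s localhost:15000/clusters | head -n 20'
```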