Set up an Envoy sidecar service mesh
This configuration is supported for Preview customers, but we don't recommend it for new Cloud Service Mesh users. For more information, see the Cloud Service Mesh overview.
This guide shows you how to configure a simple service mesh in your fleet. The guide includes the following steps:
- Deploying the Envoy sidecar injector to the cluster. The injector injects the Envoy proxy container into application Pods.
- Deploying Gateway API resources that configure the Envoy sidecar in the service mesh to route requests to an example service in the namespace store.
- Deploying a simple client to verify the deployment.
The following diagram shows the configured service mesh.
You can configure only one Mesh in a cluster, because the mesh name in the sidecar injector configuration and the name of the Mesh resource must be the same.
Deploy the Envoy sidecar injector
To deploy the sidecar injector, do the following:
Configure your project information:
# The project that contains your GKE cluster.
export CLUSTER_PROJECT_ID=YOUR_CLUSTER_PROJECT_NUMBER_HERE
# The name of your GKE cluster.
export CLUSTER=YOUR_CLUSTER_NAME
# The channel of your GKE cluster. Eg: rapid, regular, stable.
export CHANNEL=YOUR_CLUSTER_CHANNEL
# The location of your GKE cluster, Eg: us-central1 for regional GKE cluster,
# us-central1-a for zonal GKE cluster
export LOCATION=ZONE
# The mesh name of the traffic director load balancing API.
export MESH_NAME=YOUR_MESH_NAME
# The project that holds the mesh resources.
export MESH_PROJECT_NUMBER=YOUR_PROJECT_NUMBER_HERE

export TARGET=projects/${MESH_PROJECT_NUMBER}/locations/global/meshes/${MESH_NAME}

gcloud config set project ${CLUSTER_PROJECT_ID}
To find MESH_NAME, specify the value as follows, where MESH_NAME is the value of the metadata.name field in the Mesh resource specification:

gketd-MESH_NAME

For example, if the value of metadata.name in the Mesh resource is butterfly-mesh, set the value of MESH_NAME as follows:

export MESH_NAME="gketd-butterfly-mesh"
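The webhook URL in the next section is built from CLUSTER_PROJECT_ID, LOCATION, CLUSTER, and CHANNEL, so it can help to confirm those values before continuing. The following optional check is not part of the original steps; newer gcloud versions accept --location, while older ones use --zone or --region:

# Print the cluster's release channel; it should match ${CHANNEL} (case-insensitive).
gcloud container clusters describe ${CLUSTER} \
    --project=${CLUSTER_PROJECT_ID} \
    --location=${LOCATION} \
    --format='value(releaseChannel.channel)'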
Apply the configurations for the mutating webhook
The following section provides instructions for applying the MutatingWebhookConfiguration to the cluster. When a Pod is created, the in-cluster admission controller is invoked. The admission controller communicates with the managed sidecar injector to add the Envoy container to the Pod.
Apply the following mutating webhook configuration to your cluster.
cat <<EOF | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  labels:
    app: sidecar-injector
  name: td-mutating-webhook
webhooks:
- admissionReviewVersions:
  - v1beta1
  - v1
  clientConfig:
    url: https://meshconfig.googleapis.com/v1internal/projects/${CLUSTER_PROJECT_ID}/locations/${LOCATION}/clusters/${CLUSTER}/channels/${CHANNEL}/targets/${TARGET}:tdInject
  failurePolicy: Fail
  matchPolicy: Exact
  name: namespace.sidecar-injector.csm.io
  namespaceSelector:
    matchExpressions:
    - key: td-injection
      operator: Exists
  reinvocationPolicy: Never
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: '*'
  sideEffects: None
  timeoutSeconds: 30
EOF
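As an optional sanity check that is not part of the original steps, confirm that the webhook was registered under the name used in the manifest:

kubectl get mutatingwebhookconfiguration td-mutating-webhook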
If you need to customize the sidecar injector, follow the steps for customizing the sidecar injector for your cluster.
Deploy the store service
In this section, you deploy the store service in the mesh.
In a file named store.yaml, save the following manifest:

kind: Namespace
apiVersion: v1
metadata:
  name: store
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store
  namespace: store
spec:
  replicas: 2
  selector:
    matchLabels:
      app: store
      version: v1
  template:
    metadata:
      labels:
        app: store
        version: v1
    spec:
      containers:
      - name: whereami
        image: us-docker.pkg.dev/google-samples/containers/gke/whereami:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
Apply the manifest to gke-1:

kubectl apply -f store.yaml
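As an optional check that is not in the original steps, confirm that the two store replicas are running before you continue:

kubectl get pods -n store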
Create the service mesh
In a file named mesh.yaml, save the following mesh manifest. The name of the mesh resource must match the mesh name specified in the injector configmap. In this example configuration, the name td-mesh is used in both places:

apiVersion: net.gke.io/v1alpha1
kind: TDMesh
metadata:
  name: td-mesh
  namespace: default
spec:
  gatewayClassName: gke-td
  allowedRoutes:
    namespaces:
      from: All
Apply the mesh manifest to gke-1, which creates a logical mesh named td-mesh:

kubectl apply -f mesh.yaml
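To confirm that the mesh was created, you can check for the TDMesh resource in the cluster and, if the gcloud network-services commands are available in your environment, for the corresponding Mesh resource in the mesh project. Following the gketd-MESH_NAME convention described earlier, that Mesh would be named gketd-td-mesh in this example. This optional check is not part of the original steps:

kubectl get tdmesh td-mesh -n default

# Requires the Network Services API in the mesh project.
gcloud network-services meshes list \
    --project=${MESH_PROJECT_NUMBER} \
    --location=global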
In a file named store-route.yaml, save the following HTTPRoute manifest. The manifest defines an HTTPRoute resource that routes HTTP traffic that specifies the hostname example.com to the Kubernetes service store in the namespace store:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store
spec:
  parentRefs:
  - name: td-mesh
    namespace: default
    group: net.gke.io
    kind: TDMesh
  hostnames:
  - "example.com"
  rules:
  - backendRefs:
    - name: store
      namespace: store
      port: 8080
Apply the route manifest to gke-1:

kubectl apply -f store-route.yaml
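Optionally, inspect the route's status conditions to confirm that it was bound to the td-mesh parent (this check is not part of the original steps):

kubectl describe httproute store-route -n store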
Validate the deployment
Check the Mesh status and events to verify that the Mesh and HTTPRoute resources are deployed successfully:

kubectl describe tdmesh td-mesh
The output is similar to the following:
...
Status:
  Conditions:
    Last Transition Time:  2022-04-14T22:08:39Z
    Message:
    Reason:                MeshReady
    Status:                True
    Type:                  Ready
    Last Transition Time:  2022-04-14T22:08:28Z
    Message:
    Reason:                Scheduled
    Status:                True
    Type:                  Scheduled
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  ADD     36s   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  UPDATE  35s   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  SYNC    24s   mc-mesh-controller  SYNC on default/td-mesh was a success
To make sure that sidecar injection is enabled in the default namespace, run the following command:
kubectl get namespace default --show-labels
If sidecar injection is enabled, the following appears in the output:
td-injection=enabled
If sidecar injection is not enabled, see Enabling sidecar injection.
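For the basic setup in this guide, enabling injection amounts to adding the label key that the webhook's namespaceSelector above matches on. The following is a minimal sketch; the selector only checks that the td-injection key exists, so the value is arbitrary:

kubectl label namespace default td-injection=enabled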
To verify the deployment, deploy a client Pod that serves as a client to the store service defined previously. In a file named client.yaml, save the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: client
  name: client
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: client
  template:
    metadata:
      labels:
        run: client
    spec:
      containers:
      - name: client
        image: curlimages/curl
        command:
        - sh
        - -c
        - while true; do sleep 1; done
Deploy the specification:
kubectl apply -f client.yaml
The sidecar injector running in the cluster automatically injects an Envoy container into the client Pod.
To verify that the Envoy container is injected, run the following command:
kubectl describe pods -l run=client
The output is similar to the following:
...
Init Containers:
  # Istio-init sets up traffic interception for the Pod.
  istio-init:
    ...
  # td-bootstrap-writer generates the Envoy bootstrap file for the Envoy container
  td-bootstrap-writer:
    ...
Containers:
  # client is the client container that runs application code.
  client:
    ...
  # Envoy is the container that runs the injected Envoy proxy.
  envoy:
    ...
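A shorter, optional check is to print only the container names in the client Pod; the injected envoy container should be listed alongside client:

kubectl get pods -l run=client -o jsonpath='{.items[0].spec.containers[*].name}'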
After the client Pod is provisioned, send a request from the client Pod to the store service.
Get the name of the client Pod:
CLIENT_POD=$(kubectl get pod -l run=client -o=jsonpath='{.items[0].metadata.name}')

# The VIP where the following request will be sent. Because all requests
# from the client container are redirected to the Envoy proxy sidecar, you
# can use any IP address, including 10.0.0.2, 192.168.0.1, and others.
VIP='10.0.0.1'
Send a request to the store service and print the response headers:
TEST_CMD="curl -v -H 'host: example.com' $VIP"
Run the test command in the client container:
kubectl exec -it $CLIENT_POD -c client -- /bin/sh -c "$TEST_CMD"
The output is similar to the following:
*   Trying 10.0.0.1:80...
* Connected to 10.0.0.1 (10.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.82.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< content-length: 318
< access-control-allow-origin: *
< server: envoy
< date: Tue, 12 Apr 2022 22:30:13 GMT
<
{
  "cluster_name": "gke-1",
  "zone": "us-west1-a",
  "host_header": "example.com",
  ...
}
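Because the store Deployment runs two replicas, you can optionally repeat the request to watch Envoy load-balance across backends; fields in the whereami JSON response (such as the reporting Pod or zone, where present) can vary between calls. A minimal sketch, not part of the original steps:

# Send several requests through the sidecar; each one may be answered by a
# different store replica.
for i in 1 2 3 4 5; do
  kubectl exec $CLIENT_POD -c client -- /bin/sh -c \
    "curl -s -H 'host: example.com' $VIP"
  echo
done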