Set up an Envoy sidecar service mesh

This guide demonstrates how to configure a simple service mesh in your Fleet. The guide includes the following steps:

  • Deploying the Envoy sidecar injector into the cluster. The injector injects the Envoy proxy container into application Pods.
  • Deploying Gateway API resources that configure the Envoy sidecar in the service mesh to route requests to an example service in the namespace store.
  • Deploying a simple client to verify the deployment.

The following diagram shows the configured service mesh.

An Envoy sidecar service mesh in a Fleet

You can configure only one Mesh in a cluster, because the mesh name in the sidecar injector configuration and the Mesh resource's name must be identical.

Deploy the Envoy sidecar injector

To deploy the sidecar injector, you must supply the following three values.

  • TRAFFICDIRECTOR_GCP_PROJECT_NUMBER. Replace PROJECT_NUMBER with the project number of the project for your config cluster. The project number is the numeric identifier of your project.

  • TRAFFICDIRECTOR_MESH_NAME. Assign the value as follows, where MESH_NAME is the value of the field metadata.name in the Mesh resource spec:

    gketd-MESH_NAME
    

    For example, if the value of metadata.name in the Mesh resource is butterfly-mesh, set the value of TRAFFICDIRECTOR_MESH_NAME as follows:

    TRAFFICDIRECTOR_MESH_NAME: "gketd-butterfly-mesh"
    
  • TRAFFICDIRECTOR_NETWORK_NAME. Make sure that the value of TRAFFICDIRECTOR_NETWORK_NAME is set to empty:

    TRAFFICDIRECTOR_NETWORK_NAME: ""
    
  1. Download the sidecar injector package:

    wget https://storage.googleapis.com/traffic-director/td-sidecar-injector-xdsv3.tgz
    tar -xzvf td-sidecar-injector-xdsv3.tgz
    cd td-sidecar-injector-xdsv3
    
  2. In the file specs/01-configmap.yaml, populate the fields TRAFFICDIRECTOR_GCP_PROJECT_NUMBER and TRAFFICDIRECTOR_MESH_NAME, and set TRAFFICDIRECTOR_NETWORK_NAME to empty.

       apiVersion: v1
       kind: ConfigMap
       metadata:
         name: injector-mesh
         namespace: istio-control
       data:
         mesh: |-
           defaultConfig:
             discoveryAddress: trafficdirector.googleapis.com:443
    
             # Envoy proxy port to listen on for the admin interface.
             # This port is bound to 127.0.0.1.
             proxyAdminPort: 15000
    
             proxyMetadata:
               # Google Cloud Project number that your Fleet belongs to.
               # This is the numeric identifier of your project
               TRAFFICDIRECTOR_GCP_PROJECT_NUMBER: "PROJECT_NUMBER"
    
               # TRAFFICDIRECTOR_NETWORK_NAME must be empty when
               # TRAFFICDIRECTOR_MESH_NAME is set.
                TRAFFICDIRECTOR_NETWORK_NAME: ""
    
               # The value of `metadata.name` in the `Mesh` resource. When a
               # sidecar requests configurations from Traffic Director,
               # Traffic Director will only return configurations for the
               # specified mesh.
               TRAFFICDIRECTOR_MESH_NAME: "gketd-td-mesh"
    

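You can fill in the placeholders with a short shell sketch like the following. It assumes the gcloud CLI is installed and authenticated and that PROJECT_ID holds your project ID; the sed invocation is illustrative, not part of the injector package:

```shell
# Look up the numeric project number for the config cluster's project.
PROJECT_NUMBER="$(gcloud projects describe "${PROJECT_ID}" \
    --format='value(projectNumber)')"

# Substitute the placeholder in the injector ConfigMap in place.
# (Assumes GNU sed; on macOS use `sed -i ''` instead of `sed -i`.)
sed -i "s/PROJECT_NUMBER/${PROJECT_NUMBER}/" specs/01-configmap.yaml
```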
After you complete the previous instructions, follow these steps to deploy the sidecar injector to your cluster:

  1. Configure TLS for the sidecar injector.
  2. Install the sidecar injector to your GKE cluster.
  3. [Optional] Open the required port on a private cluster.
  4. Enable sidecar injection.
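The last step can be sketched as a single command, assuming you enable injection for the default namespace, which is where the client Pod in this guide runs:

```shell
# Label the default namespace so the injector webhook adds the Envoy
# sidecar to new Pods created in it.
kubectl label namespace default istio-injection=enabled
```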

Deploy the store service

In this section, you deploy the store service in the mesh.

  1. In the store.yaml file, save the following manifest:

    kind: Namespace
    apiVersion: v1
    metadata:
      name: store
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: store
      namespace: store
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: store
          version: v1
      template:
        metadata:
          labels:
            app: store
            version: v1
        spec:
          containers:
          - name: whereami
            image: gcr.io/google-samples/whereami:v1.2.20
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: store
      namespace: store
    spec:
      selector:
        app: store
      ports:
      - port: 8080
        targetPort: 8080
    
  2. Apply the manifest to gke-1:

    kubectl apply -f store.yaml
    
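Before you continue, you can confirm that both store replicas are up:

```shell
# Both store Pods should reach the Running state.
kubectl get pods -n store
```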

Create a service mesh

  1. In the mesh.yaml file, save the following mesh manifest. The name of the Mesh resource must match the mesh name specified in the injector ConfigMap. In this example configuration, the name td-mesh is used in both places:

    apiVersion: net.gke.io/v1alpha1
    kind: TDMesh
    metadata:
      name: td-mesh
      namespace: default
    spec:
      gatewayClassName: gke-td
      allowedRoutes:
        namespaces:
          from: All
    
  2. Apply the mesh manifest to gke-1, which creates a logical mesh with the name td-mesh:

    kubectl apply -f mesh.yaml
    
  3. In the store-route.yaml file, save the following HTTPRoute manifest. The manifest defines an HTTPRoute resource that routes HTTP traffic with the hostname example.com to the Kubernetes Service store in the namespace store:

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: HTTPRoute
    metadata:
      name: store-route
      namespace: store
    spec:
      parentRefs:
      - name: td-mesh
        namespace: default
        group: net.gke.io
        kind: TDMesh
      hostnames:
      - "example.com"
      rules:
      - backendRefs:
        - name: store
          namespace: store
          port: 8080
    
  4. Apply the route manifest to gke-1:

    kubectl apply -f store-route.yaml
    
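If you prefer not to poll manually, a kubectl wait sketch like the following blocks until the mesh reports readiness; the condition name Ready matches the status shown in the next section:

```shell
# Wait up to two minutes for the TDMesh resource to become Ready.
kubectl wait --for=condition=Ready tdmesh/td-mesh \
    --namespace default --timeout=120s
```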

Validate the deployment

  1. Inspect the Mesh status and events to validate that the Mesh and HTTPRoute resources are successfully deployed:

    kubectl describe tdmesh td-mesh
    

    The output is similar to the following:

    ...
    
    Status:
      Conditions:
        Last Transition Time:  2022-04-14T22:08:39Z
        Message:
        Reason:                MeshReady
        Status:                True
        Type:                  Ready
        Last Transition Time:  2022-04-14T22:08:28Z
        Message:
        Reason:                Scheduled
        Status:                True
        Type:                  Scheduled
    Events:
      Type    Reason  Age   From                Message
      ----    ------  ----  ----                -------
      Normal  ADD     36s   mc-mesh-controller  Processing mesh default/td-mesh
      Normal  UPDATE  35s   mc-mesh-controller  Processing mesh default/td-mesh
      Normal  SYNC    24s   mc-mesh-controller  SYNC on default/td-mesh was a success
    
  2. To make sure that sidecar injection is enabled in the default namespace, run the following command:

    kubectl get namespace default --show-labels
    

    If sidecar injection is enabled, you see the following in the output:

    istio-injection=enabled
    

    If sidecar injection is not enabled, see Enable sidecar injection.

  3. To verify the deployment, deploy a client Pod that serves as a client to the store service defined previously. In the client.yaml file, save the following:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        run: client
      name: client
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: client
      template:
        metadata:
          labels:
            run: client
        spec:
          containers:
          - name: client
            image: curlimages/curl
            command:
            - sh
            - -c
            - while true; do sleep 1; done
    
  4. Deploy the spec:

    kubectl apply -f client.yaml
    

    The sidecar injector running in the cluster automatically injects an Envoy container into the client Pod.

  5. To verify that the Envoy container is injected, run the following command:

    kubectl describe pods -l run=client
    

    The output is similar to the following:

    ...
    Init Containers:
      # Istio-init sets up traffic interception for the Pod.
      istio-init:
    ...
      # td-bootstrap-writer generates the Envoy bootstrap file for the Envoy container
      td-bootstrap-writer:
    ...
    Containers:
    # client is the client container that runs application code.
      client:
    ...
    # Envoy is the container that runs the injected Envoy proxy.
      envoy:
    ...
    
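A quicker check than reading the full describe output is to list the container names directly; the expected names come from the output above:

```shell
# Print the names of the regular containers in the client Pod.
kubectl get pods -l run=client \
    -o jsonpath='{.items[0].spec.containers[*].name}'
# The list should include both "client" and the injected "envoy".
```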

After the client Pod is provisioned, send a request from the client Pod to the store service.

  1. Get the name of the client Pod:

    CLIENT_POD=$(kubectl get pod -l run=client -o=jsonpath='{.items[0].metadata.name}')
    
    # The VIP where the following request will be sent. Because all requests
    # from the client container are redirected to the Envoy proxy sidecar, you
    # can use any IP address, including 10.0.0.2, 192.168.0.1, and others.
    VIP='10.0.0.1'
    
  2. Build the test request to the store service; the -v flag outputs the response headers:

    TEST_CMD="curl -v -H 'host: example.com' $VIP"
    
  3. Execute the test command in the client container:

    kubectl exec -it $CLIENT_POD -c client -- /bin/sh -c "$TEST_CMD"
    

    The output is similar to the following:

    *   Trying 10.0.0.1:80...
    * Connected to 10.0.0.1 (10.0.0.1) port 80 (#0)
    > GET / HTTP/1.1
    > Host: example.com
    > User-Agent: curl/7.82.0-DEV
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < content-type: application/json
    < content-length: 318
    < access-control-allow-origin: *
    < server: envoy
    < date: Tue, 12 Apr 2022 22:30:13 GMT
    <
    {
      "cluster_name": "gke-1",
      "zone": "us-west1-a",
      "host_header": "example.com",
      ...
    }
    
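To see that requests are load balanced across both store replicas, you can repeat the request a few times and compare the pod_name field that whereami includes in its JSON responses:

```shell
# Send several requests; the pod_name in the responses should vary
# between the two store replicas.
for i in 1 2 3 4; do
  kubectl exec -it "$CLIENT_POD" -c client -- \
      /bin/sh -c "curl -s -H 'host: example.com' $VIP" | grep pod_name
done
```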

What's next