From edge to mesh: Exposing service mesh applications through GKE Ingress

This tutorial shows how to combine Anthos Service Mesh with Cloud Load Balancing to expose applications in a service mesh to internet clients.

Anthos Service Mesh is a managed service mesh, based on Istio, that provides a security-enhanced, observable, and standardized communication layer for applications. Whether you use Anthos Service Mesh, Traffic Director, or Istio, a service mesh provides a holistic communications platform for clients that are communicating in the mesh. However, connecting clients that are outside the mesh to applications hosted inside the mesh remains a challenge.

You can expose an application to clients in many ways, depending on where the client is. This tutorial shows you how to expose an application to external clients by combining Cloud Load Balancing with Anthos Service Mesh so that the load balancers integrate with the mesh. This tutorial is intended for advanced practitioners who run Anthos Service Mesh or Istio on Google Kubernetes Engine.

Mesh ingress gateway

Istio 0.8 introduced the mesh ingress gateway that provides a dedicated set of proxies whose ports are exposed to traffic coming from outside the service mesh. These mesh ingress proxies let you control L4 exposure behavior separately from application routing behavior. The proxies also let you apply routing and policy to mesh-external traffic before it arrives at an application sidecar. Mesh ingress defines the treatment of traffic when it reaches a node in the mesh, but external components must define how traffic first arrives at the mesh.

To manage this external traffic, you need a load balancer that is external to the mesh. This tutorial uses Google Cloud Load Balancing provisioned through GKE Ingress resources to automate deployment. The canonical example of this setup is an external load balancing service that (in the case of Google Cloud) deploys a public TCP/UDP load balancer. That load balancer points at the NodePorts of a GKE cluster. These NodePorts expose the Istio ingress gateway Pods, which route traffic to downstream mesh sidecar proxies. The following diagram illustrates this topology. Load balancing for internal private traffic looks similar to this architecture, except that you deploy an internal TCP/UDP load balancer instead.

An external load balancer routes external clients to the mesh through ingress gateway proxies.
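
To make the L4 pattern concrete, the following manifest is a minimal sketch, not one of this tutorial's steps, of exposing the ingress gateway through a Service of type LoadBalancer. The port mappings and the istio: ingressgateway selector reflect common Istio defaults, so verify them against your installation:

    apiVersion: v1
    kind: Service
    metadata:
      name: istio-ingressgateway
      namespace: istio-system
    spec:
      # type: LoadBalancer provisions an external TCP/UDP (L4) load balancer
      # that forwards traffic to the gateway Pods through node ports.
      type: LoadBalancer
      selector:
        istio: ingressgateway
      ports:
      - name: http2
        port: 80
        targetPort: 8080
      - name: https
        port: 443
        targetPort: 8443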

Using L4 transparent load balancing with a mesh ingress gateway offers the following advantages:

  • This setup simplifies deploying the load balancer.
  • The load balancer provides a stable virtual IP (VIP), health checking, and reliable traffic distribution when cluster changes, node outages, or process outages occur.
  • All routing rules, TLS termination, and traffic policy are handled in a single location at the mesh ingress gateway.

GKE Ingress and Services

You can provide access to applications for clients that are outside the cluster in many ways. The following list describes the Kubernetes primitives that are available for deploying load balancers on Google Cloud. The type of load balancer you use to expose applications to clients depends largely on whether the clients are external or internal, what kind of protocol support is required, and whether the service mesh spans multiple GKE clusters or is contained in a single cluster.

Any of the following load balancer types can expose mesh-hosted applications, depending on the use case:

  • Ingress for External HTTP(S) Load Balancing (external HTTP(S) load balancer): L7 proxies in Google edge points of presence (PoPs); public VIP; globally scoped; single cluster.
  • Ingress for Internal HTTP(S) Load Balancing (internal HTTP(S) load balancer): L7 proxies inside the Virtual Private Cloud (VPC) network; private VIP; regionally scoped; single cluster.
  • External LoadBalancer Service (external TCP/UDP load balancer): L4 pass-through at Google edge PoPs; public VIP; regionally scoped; single cluster.
  • Internal LoadBalancer Service (internal TCP/UDP load balancer): L4 pass-through in the VPC network; private VIP; regionally scoped; single cluster.
  • Ingress for Anthos, multi-cluster external ingress (external HTTP(S) load balancer): L7 proxies in Google edge PoPs; public VIP; globally scoped; multi-cluster.

Although the default load balancer for Anthos Service Mesh is the external TCP/UDP load balancer, this tutorial focuses on the external HTTP(S) load balancer that you deploy by using External HTTP(S) Load Balancing. The external HTTP(S) load balancer provides integration with edge services like Identity-Aware Proxy (IAP), Google Cloud Armor, and Cloud CDN, as well as a globally distributed network of edge proxies. The next section describes the architecture and advantages of using two layers of HTTP load balancing.

Cloud ingress and mesh ingress

Deploying external L7 load balancing outside of the mesh along with a mesh ingress layer offers significant advantages, especially for internet traffic. Even though Anthos Service Mesh and Istio ingress gateways provide advanced routing and traffic management in the mesh, some functions are better served at the edge of the network. Taking advantage of internet-edge networking through Google Cloud's External HTTP(S) Load Balancing might provide significant performance, reliability, or security-related benefits over mesh-based ingress, such as DDoS protection, global edge load balancing, and integration with services like Google Cloud Armor, Identity-Aware Proxy (IAP), and Cloud CDN.

This external layer of L7 load balancing is referred to as cloud ingress because it is built on cloud-managed load balancers rather than the self-hosted proxies that are used by mesh ingress. The combination of cloud ingress and mesh ingress utilizes complementary capabilities of the Google Cloud infrastructure and the mesh. The following diagram illustrates how you can combine cloud ingress and mesh ingress to serve as two load balancing layers for internet traffic.

Cloud ingress acts as the gateway for external traffic to the mesh through the VPC network.

In this topology, the cloud ingress layer sources traffic from outside of the service mesh and directs that traffic to the mesh ingress layer. The mesh ingress layer then directs traffic to the mesh-hosted application backends.

Cloud and mesh ingress topology

This section describes the complementary roles that each ingress layer fulfills when you use them together. These roles are not concrete rules, but rather guidelines that use the advantages of each layer. Variations of this pattern are likely, depending on your use case.

  • Cloud ingress. When paired with mesh ingress, the cloud ingress layer is best used for edge security and global load balancing. Because the cloud ingress layer is integrated with DDoS protection, cloud firewalls, authentication, and encryption products at the edge, it excels at running these services outside of the mesh. The routing logic is typically straightforward at this layer, but the logic can be more complex for multi-cluster and multi-region environments. Because of the critical function of internet-facing load balancers, the cloud ingress layer is likely managed by an infrastructure team that has exclusive control over how applications are exposed and secured on the internet. This control also makes the layer less flexible and dynamic than a developer-driven infrastructure, a consideration that can affect how, and to whom, you grant administrative access to this layer.
  • Mesh ingress. When paired with cloud ingress, the mesh ingress layer provides flexible routing that is close to the application. Because of this flexibility, the mesh ingress is better than cloud ingress for complex routing logic and application-level visibility. The separation between ingress layers also makes it easier for application owners to directly control this layer without impacting other teams. When you expose service mesh applications through an L4 load balancer instead of an L7 load balancer, you should terminate client TLS at the mesh ingress layer inside the mesh to help secure applications.
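
For example, if you expose the mesh through an L4 load balancer, terminating client TLS at the mesh ingress layer might look like the following Istio Gateway sketch. The hostname and the example-tls-cert Secret are hypothetical placeholders, not resources that this tutorial creates:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: example-tls-gateway
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway   # bind to the mesh ingress proxies
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        tls:
          mode: SIMPLE                      # terminate client TLS at the mesh ingress
          credentialName: example-tls-cert  # hypothetical Secret holding the key and certificate
        hosts:
        - "frontend.example.com"            # hypothetical hostname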

Health checking

One complexity of using two layers of L7 load balancing is health checking. You must configure each load balancer to check the health of the next layer to ensure that it can receive traffic. The topology in the following diagram shows how cloud ingress checks the health of the mesh ingress proxies, and the mesh, in return, checks the health of the application backends.

Cloud ingress checks the health of the mesh ingress, and the mesh ingress checks the health of the application backends.

This topology has the following considerations:

  • Cloud ingress. In this tutorial, you configure the Google Cloud load balancer through ingress to check the health of the mesh ingress proxies on their exposed health check ports. If a mesh proxy is down, or if the cluster, mesh, or region is unavailable, the Google Cloud load balancer detects this condition and doesn't send traffic to the mesh proxy.
  • Mesh ingress. In the mesh application, you perform health checks on the backends directly so that you can execute load balancing and traffic management locally.
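
The backend health checks in the second bullet typically build on the applications' own Kubernetes readiness probes, which determine whether an endpoint is eligible to receive traffic. The following Deployment fragment is a hypothetical example; the app name, image, and /healthz path are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app   # hypothetical application
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: server
            image: example/app:1.0   # hypothetical image
            ports:
            - containerPort: 8080
            # The mesh and Kubernetes route traffic only to Pods that pass this probe.
            readinessProbe:
              httpGet:
                path: /healthz       # hypothetical health endpoint
                port: 8080
              periodSeconds: 5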

Security

The preceding topology involves several elements of security. One of the most critical elements is in how you configure encryption and deploy certificates. Ingress for External HTTP(S) Load Balancing has deep integration with Google-managed certificates. This integration automatically provisions public certificates, attaches them to a load balancer, and renews and rotates certificates all through the declarative GKE Ingress interface. Internet clients authenticate against the public certificates and connect to the external load balancer as the first hop in the Virtual Private Cloud (VPC).

The next hop, which is between the Google Front End (GFE) and the mesh ingress proxy, is encrypted by default. Network-level encryption between the GFEs and their backends is applied automatically. However, if your security requirements dictate that the platform owner retain ownership of the encryption keys, you can enable HTTPS between the cluster ingress (the GFE) and the mesh ingress (the Envoy proxy instance). When you enable HTTPS for this path, you can use a self-signed or public certificate to encrypt traffic because the GFE doesn't validate the certificate. To help prevent the mishandling of certificates, do not use the public certificate for the public load balancer anywhere else. Instead, we recommend that you use separate certificates in the service mesh.
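
For example, if you choose to enable HTTPS on this hop, you could generate a self-signed certificate and store it as a Kubernetes Secret for the mesh ingress to reference. The following commands are a sketch with placeholder names, not part of this tutorial's steps:

    # Generate a self-signed certificate and key (placeholder common name).
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=mesh-ingress.internal" \
        -keyout mesh-ingress.key -out mesh-ingress.crt

    # Store the pair as a TLS Secret that the ingress gateway can reference.
    kubectl create secret tls mesh-ingress-tls \
        --cert=mesh-ingress.crt --key=mesh-ingress.key -n istio-system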

If the service mesh mandates TLS, then all traffic is encrypted between sidecar proxies and to the mesh ingress. This tutorial does not enable HTTPS between the GFEs and the mesh ingress layer, although that architecture might be applicable for your environment. The following diagram illustrates HTTPS encryption from the client to the Google Cloud load balancer, from the load balancer to the mesh ingress proxy, and from the ingress proxy to the sidecar proxy.

Security is implemented using managed certificates outside of the mesh and internal certificates inside the mesh.

Objectives

  • Deploy a Google Kubernetes Engine (GKE) cluster on Google Cloud.
  • Deploy an Istio-based Anthos Service Mesh on your GKE cluster.
  • Deploy the Online Boutique application on the GKE cluster that you expose to clients on the internet.
  • Configure GKE Ingress to terminate public HTTPS traffic and direct that traffic to service mesh-hosted applications.

Costs

This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine, Compute Engine, and Cloud Load Balancing.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to the project selector page

  2. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  3. In the Cloud Console, activate Cloud Shell.

    Activate Cloud Shell

    You run all of the terminal commands for this tutorial from Cloud Shell.

  4. Set your default Google Cloud project:

    export PROJECT=$(gcloud info --format='value(config.project)')
    export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT} --format="value(projectNumber)")
    gcloud config set project ${PROJECT}
    
  5. Enable the APIs for this tutorial.

    Anthos Service Mesh

    For Anthos Service Mesh, enable the following:

    • Google Kubernetes Engine API (container.googleapis.com)
    • Compute Engine API (compute.googleapis.com)
    • Cloud Monitoring API (monitoring.googleapis.com)
    • Cloud Logging API (logging.googleapis.com)
    • Cloud Trace API (cloudtrace.googleapis.com)
    • Anthos Service Mesh Certificate Authority API (meshca.googleapis.com)
    • Mesh Telemetry API (meshtelemetry.googleapis.com)
    • Mesh Configuration API (meshconfig.googleapis.com)
    • IAM Service Account Credentials API (iamcredentials.googleapis.com)
    • Anthos API (anthos.googleapis.com)
    • Resource Manager API (cloudresourcemanager.googleapis.com)

    gcloud services enable \
        container.googleapis.com \
        compute.googleapis.com \
        monitoring.googleapis.com \
        logging.googleapis.com \
        cloudtrace.googleapis.com \
        meshca.googleapis.com \
        meshtelemetry.googleapis.com \
        meshconfig.googleapis.com \
        iamcredentials.googleapis.com \
        anthos.googleapis.com \
        cloudresourcemanager.googleapis.com
    

    Istio

    For Istio, enable the following:

    • Google Kubernetes Engine API (container.googleapis.com)
    • Compute Engine API (compute.googleapis.com)
    • Cloud Monitoring API (monitoring.googleapis.com)
    • Cloud Logging API (logging.googleapis.com)
    • Cloud Trace API (cloudtrace.googleapis.com)
    • IAM Service Account Credentials API (iamcredentials.googleapis.com)
    • Resource Manager API (cloudresourcemanager.googleapis.com)

    gcloud services enable \
        container.googleapis.com \
        compute.googleapis.com \
        monitoring.googleapis.com \
        logging.googleapis.com \
        cloudtrace.googleapis.com \
        iamcredentials.googleapis.com \
        cloudresourcemanager.googleapis.com
    
  6. Create a working directory for the files that you use in this tutorial:

    mkdir -p ${HOME}/edge-to-mesh
    cd ${HOME}/edge-to-mesh
    export WORKDIR=`pwd`
    

    After you finish the tutorial, you can delete the working directory.

Creating GKE clusters

The features that are described in this tutorial require a GKE cluster version 1.16 or later.

  1. In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file.

    touch edge2mesh_kubeconfig
    export KUBECONFIG=${WORKDIR}/edge2mesh_kubeconfig
    
  2. Define environment variables for the GKE cluster:

    export CLUSTER_NAME=edge-to-mesh
    export CLUSTER_LOCATION=us-west1-a
    
  3. Create a GKE cluster.

    Anthos Service Mesh

    gcloud beta container clusters create ${CLUSTER_NAME} \
        --machine-type=e2-standard-4 \
        --num-nodes=4 \
        --zone ${CLUSTER_LOCATION} \
        --enable-stackdriver-kubernetes \
        --enable-ip-alias \
        --workload-pool=${PROJECT}.svc.id.goog \
        --labels=mesh_id=proj-${PROJECT_NUMBER} \
        --release-channel rapid
    

    Istio

    gcloud beta container clusters create ${CLUSTER_NAME} \
        --machine-type=e2-standard-4 \
        --num-nodes=4 \
        --zone ${CLUSTER_LOCATION} \
        --enable-stackdriver-kubernetes \
        --enable-ip-alias \
        --release-channel rapid
    
  4. Ensure that the cluster is running:

    gcloud container clusters list
    

    The output is similar to the following:

    NAME          LOCATION    MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
    edge-to-mesh  us-west1-a  1.17.9-gke.600  34.83.193.134  e2-standard-4  1.17.9-gke.600  4          RUNNING
    
  5. (For Anthos Service Mesh installations only) Initialize your project to prepare it for installing Anthos Service Mesh.

    curl --request POST \
        --header "Authorization: Bearer $(gcloud auth print-access-token)" \
        --data '' \
        "https://meshconfig.googleapis.com/v1alpha1/projects/${PROJECT}:initialize"
    

    The output is similar to the following:

    {}
    

    This command creates a service account that lets components, such as the sidecar proxy, more securely access your project's data and resources. This step is required only if you install Anthos Service Mesh; it is not required if you are using Istio OSS.

  6. Connect to the cluster:

    gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${CLUSTER_LOCATION} --project ${PROJECT}
    
  7. (For Anthos Service Mesh installations only) Grant cluster administrator permissions to the current user:

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole=cluster-admin \
        --user="$(gcloud config get-value core/account)"
    

    These permissions let you create the necessary role-based access control (RBAC) rules for Anthos Service Mesh.
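
    Optionally, before you install the mesh, confirm that the cluster's nodes are registered and ready. This quick sanity check is not one of the tutorial's required steps:

    kubectl get nodes

    Each of the four nodes should report a STATUS of Ready.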

Installing a service mesh

  1. In Cloud Shell, download the Istio or Anthos Service Mesh installation file.

    Anthos Service Mesh

    export ASM_VERSION=1.6.5-asm.7
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-${ASM_VERSION}-linux-amd64.tar.gz
    tar xzf istio-${ASM_VERSION}-linux-amd64.tar.gz
    rm -rf istio-${ASM_VERSION}-linux-amd64.tar.gz
    

    Istio

    export ISTIO_VERSION=1.6.7
    curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
    
  2. Add the istioctl command-line utility, which comes bundled with Istio and Anthos Service Mesh, to your PATH variable. To simplify the installation, this tutorial uses istioctl (a wrapper around kubectl).

    Anthos Service Mesh

    export PATH=${WORKDIR}/istio-${ASM_VERSION}/bin:$PATH
    istioctl version --remote=false
    

    The output is similar to the following:

    1.6.5-asm.7
    

    Istio

    export PATH=${WORKDIR}/istio-${ISTIO_VERSION}/bin:$PATH
    istioctl version --remote=false
    

    The output is similar to the following:

    1.6.7
    
  3. (For Anthos Service Mesh installations only) Prepare the Anthos Service Mesh configuration file by downloading and configuring the Anthos Service Mesh kpt package:

    sudo apt-get install google-cloud-sdk-kpt
    kpt pkg get \
    https://github.com/GoogleCloudPlatform/anthos-service-mesh-packages.git/asm@release-1.6-asm \
    ${WORKDIR}/asm
    kpt cfg set ${WORKDIR}/asm gcloud.container.cluster ${CLUSTER_NAME}
    kpt cfg set ${WORKDIR}/asm gcloud.core.project ${PROJECT}
    kpt cfg set ${WORKDIR}/asm gcloud.compute.location ${CLUSTER_LOCATION}
    
  4. Install the Istio control plane or Anthos Service Mesh control plane on the edge-to-mesh cluster by using istioctl, its operator, and a deployment profile named demo. The demo profile installs all components, including observability components such as Grafana, Kiali, and Jaeger.

    Anthos Service Mesh

    istioctl install -f ${WORKDIR}/asm/cluster/istio-operator.yaml --set values.gateways.istio-ingressgateway.type=ClusterIP
    

    The output is similar to the following:

    ! global.mtls.enabled is deprecated; use the PeerAuthentication resource instead
    ✔ Istio core installed
    ✔ Istiod installed
    ✔ Ingress gateways installed
    ✔ Installation complete
    

    Istio

    istioctl install --set profile=demo --set values.gateways.istio-ingressgateway.type=ClusterIP
    

    The output is similar to the following:

    ✔ Istio core installed
    ✔ Istiod installed
    ✔ Egress gateways installed
    ✔ Ingress gateways installed
    ✔ Addons installed
    ✔ Installation complete
    
  5. Ensure that all deployments are up and running:

    kubectl wait --for=condition=available --timeout=600s deployment --all -n istio-system
    

    For Istio, the output is similar to the following:

    deployment.extensions/grafana condition met
    deployment.extensions/istio-egressgateway condition met
    deployment.extensions/istio-ingressgateway condition met
    deployment.extensions/istio-tracing condition met
    deployment.extensions/istiod condition met
    deployment.extensions/kiali condition met
    deployment.extensions/prometheus condition met
    

    For Anthos Service Mesh, the output is similar to the following:

    deployment.apps/istio-ingressgateway condition met
    deployment.apps/istiod condition met
    deployment.apps/promsd condition met
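
    Optionally, you can also list the Pods in the istio-system namespace to confirm that the control plane and ingress gateway containers are running:

    kubectl get pods -n istio-system

    Every Pod should show a STATUS of Running before you continue.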
    

Installing the Online Boutique sample app

  1. In Cloud Shell, add a namespace label to the default namespace:

    kubectl label namespace default istio-injection=enabled
    

    Labeling the namespace instructs Istio to automatically inject Envoy sidecar proxies when an application is deployed in it. The output is similar to the following:

    namespace/default labeled
    
  2. Download the Kubernetes and Istio YAML files for the Online Boutique sample app:

    curl -LO \
        https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
    curl -LO \
        https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/istio-manifests.yaml
    
  3. Deploy the Online Boutique app:

    kubectl apply -f kubernetes-manifests.yaml
    

    The output is similar to the following:

    deployment.apps/frontend created
    service/frontend created
    service/frontend-external created
    ...
    
  4. Ensure that all deployments are up and running:

    kubectl get pods
    

    The output is similar to the following:

    NAME                                     READY   STATUS    RESTARTS   AGE
    adservice-d854d8786-fjb7q                2/2     Running   0          3m
    cartservice-85b5d5b4ff-8qn7g             2/2     Running   0          2m59s
    checkoutservice-5f9bf659b8-sxhsq         2/2     Running   0          3m1s
    ...
    
  5. Deploy the Istio manifests:

    kubectl apply -f istio-manifests.yaml
    

    The output is similar to the following:

    virtualservice.networking.istio.io/frontend created
    gateway.networking.istio.io/frontend-gateway created
    virtualservice.networking.istio.io/frontend-ingress created
    serviceentry.networking.istio.io/whitelist-egress-googleapis created
    serviceentry.networking.istio.io/whitelist-egress-google-metadata created
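
    Optionally, you can confirm that the sidecar injection you enabled earlier worked by listing the containers of an application Pod. This check is not one of the tutorial's required steps:

    kubectl get pod -l app=frontend \
        -o jsonpath='{.items[0].spec.containers[*].name}'

    The output should list the application container alongside istio-proxy, which confirms that the Envoy sidecar was injected.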
    

Deploying GKE Ingress

In the following steps, you deploy the external HTTP(S) load balancer through GKE's Ingress controller. The Ingress resource automates the provisioning of the load balancer, its TLS certificates, and backend health checking. Additionally, you use Cloud Endpoints to automatically provision a public DNS name for the application.

Apply backend Service settings

  1. In Cloud Shell, look at the istio-ingressgateway Service to see how it is deployed before you apply any customizations:

    kubectl get svc -n istio-system istio-ingressgateway
    

    The output shows that the istio-ingressgateway Service is deployed as type ClusterIP and has no external IP address. In the next step, you add annotations to customize the Service for this tutorial.

    NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                        AGE
    istio-ingressgateway   ClusterIP   10.44.15.44   <none>        15021/TCP,80/TCP,443/TCP,31400/TCP,15443/TCP   5m57s
    

    Earlier in this tutorial, you used the Anthos Service Mesh or Istio installation profiles to deploy the istio-ingressgateway as a ClusterIP Service. This customized profile doesn't deploy a TCP/UDP load balancer as a part of the service mesh deployment because this tutorial exposes apps through HTTP-based Ingress resources. At this point in the tutorial, the application cannot be accessed from outside the cluster because nothing is exposing it. In the following steps, you deploy the components that are necessary to expose the mesh-hosted application through the external HTTP(S) load balancer.

  2. Save the following manifest to a file that is named ingress-service-patch.yaml:

    cat <<EOF > ingress-service-patch.yaml
    kind: Service
    metadata:
      name: istio-ingressgateway
      namespace: istio-system
      annotations:
        cloud.google.com/backend-config: '{"default": "ingress-backendconfig"}'
        cloud.google.com/neg: '{"ingress": true}'
    EOF
    

    This manifest defines the custom annotations that GKE Ingress uses. In the next step, you patch the istio-ingressgateway Service with these annotations.

    The annotations set the following parameters for the Ingress load balancer when it's deployed:

    • cloud.google.com/backend-config refers to the name of a custom resource called BackendConfig. The Ingress controller uses BackendConfig to set parameters on the Google Cloud BackendService resource. You use this resource in the next step to define custom parameters of the Google Cloud health check.
    • cloud.google.com/neg: '{"ingress": true}' enables the Ingress backends (the mesh ingress proxies in this case) for container-native load balancing. For more efficient and stable load balancing, these backends use network endpoint groups (NEGs) instead of instance groups.
  3. Apply the annotations from the patch to the istio-ingressgateway Service:

    kubectl patch svc istio-ingressgateway -n istio-system --patch "$(cat ingress-service-patch.yaml)"
    

    The output is similar to the following:

    service/istio-ingressgateway patched
    
  4. Save the following BackendConfig manifest as ingress-backendconfig.yaml:

    cat <<EOF > ingress-backendconfig.yaml
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: ingress-backendconfig
      namespace: istio-system
    spec:
      healthCheck:
        requestPath: /healthz/ready
        port: 15021
        type: HTTP
      securityPolicy:
        name: edge-fw-policy
    EOF
    

    BackendConfig is a Custom Resource Definition (CRD) that defines backend parameters for Ingress load balancing. For a complete list of the backend and frontend parameters that you can configure through GKE Ingress, see Ingress features.

    In this tutorial, the BackendConfig manifest specifies custom health checks for the mesh ingress proxies. Anthos Service Mesh and Istio expose their sidecar proxy health checks on port 15021 at the /healthz/ready path. Custom health check parameters are required because the serving port (80) of the mesh ingress proxies differs from their health check port (15021). GKE Ingress uses the following health check parameters in BackendConfig to configure the Google Cloud load balancer health checks. The manifest also references a security policy that helps protect load-balanced traffic from different kinds of network attacks.

    • healthCheck.port defines the port on each Pod's IP address that the Google Cloud load balancer sends health checks to.
    • healthCheck.requestPath defines the HTTP path that receives health checks on the specified port.
    • type defines the protocol of the health check (in this case, HTTP).
    • securityPolicy.name refers to the name of a Cloud Armor security policy.
  5. Deploy ingress-backendconfig.yaml in your cluster to create the BackendConfig resource:

    kubectl apply -f ingress-backendconfig.yaml
    

    The output is similar to the following:

    backendconfig.cloud.google.com/ingress-backendconfig created
    

    The BackendConfig parameters and the istio-ingressgateway Service annotations are not applied to a Google Cloud load balancer until the Ingress resource is deployed. The Ingress deployment ties all of these resources together.
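
If you want to verify that the health check endpoint referenced in the BackendConfig responds, you can port-forward to a mesh ingress proxy and query it from Cloud Shell. This optional check is not one of the tutorial's steps:

    # Forward the gateway's status port to Cloud Shell in the background.
    kubectl -n istio-system port-forward deploy/istio-ingressgateway 15021:15021 &

    # A healthy proxy returns HTTP 200 on the readiness path.
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:15021/healthz/ready

    # Stop the background port-forward.
    kill %1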

Define security policies

Google Cloud Armor provides DDoS defense and customizable security policies that you can attach to a load balancer through Ingress resources. In the following steps, you create a security policy that uses preconfigured rules to block cross-site scripting (XSS) attacks. This rule helps block traffic that matches known attack signatures but allows all other traffic. Your environment might use different rules, depending on your workloads.

  1. In Cloud Shell, create a security policy that is called edge-fw-policy:

    gcloud compute security-policies create edge-fw-policy \
        --description "Block XSS attacks"
    
  2. Create a security policy rule that uses the preconfigured XSS filters:

    gcloud compute security-policies rules create 1000 \
        --security-policy edge-fw-policy \
        --expression "evaluatePreconfiguredExpr('xss-stable')" \
        --action "deny-403" \
        --description "XSS attack filtering"
    

You referenced edge-fw-policy in the ingress-backendconfig manifest in the previous section. When the Ingress resource is deployed, it binds this security policy to the load balancer to help protect any backends of the istio-ingressgateway Service.
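
Optionally, you can confirm the policy and its rule with the following command (not a required step):

    gcloud compute security-policies describe edge-fw-policy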

Configure IP addressing and DNS

  1. In Cloud Shell, create a global static IP for the Google Cloud load balancer:

    gcloud compute addresses create ingress-ip --global
    

    The Ingress resource uses this static IP address, which stays the same even if the external load balancer is recreated.

  2. Get the static IP address:

    export GCLB_IP=$(gcloud compute addresses describe ingress-ip --global --format=json | jq -r '.address')
    echo ${GCLB_IP}
    

    To create a stable, human-friendly mapping to your Ingress IP, you must have a public DNS record. You can use any DNS provider and automation that you want. This tutorial uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for a public IP.

  3. Save the following YAML specification to a file named dns-spec.yaml:

    cat <<EOF > dns-spec.yaml
    swagger: "2.0"
    info:
      description: "Cloud Endpoints DNS"
      title: "Cloud Endpoints DNS"
      version: "1.0.0"
    paths: {}
    host: "frontend.endpoints.${PROJECT}.cloud.goog"
    x-google-endpoints:
    - name: "frontend.endpoints.${PROJECT}.cloud.goog"
      target: "${GCLB_IP}"
    EOF
    

    The YAML specification defines the public DNS record in the form of frontend.endpoints.${PROJECT}.cloud.goog, where ${PROJECT} is your unique project ID.

  4. Deploy the dns-spec.yaml file in your Cloud project:

    gcloud endpoints services deploy dns-spec.yaml
    

    The output is similar to the following:

    Operation finished successfully. The following command can describe the Operation details:
     gcloud endpoints operations describe operations/rollouts.frontend.endpoints.edge2mesh.cloud.goog:442b2b38-4aee-4c60-b9fc-28731657ee08
    
    Service Configuration [2020-04-28r0] uploaded for service [frontend.endpoints.edge2mesh.cloud.goog]
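
    Optionally, after the deployment finishes, you can confirm that the DNS record resolves to your static IP address. Propagation can take a few minutes:

    dig +short "frontend.endpoints.${PROJECT}.cloud.goog"

    The output should match the value of ${GCLB_IP}.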
    

    Now that the IP and DNS are configured, you can generate a public certificate to secure the Ingress frontend. GKE Ingress supports Google-managed certificates as Kubernetes resources, which lets you provision them through declarative means.

Provision a TLS certificate

  1. In Cloud Shell, save the following YAML manifest as managed-cert.yaml:

    cat <<EOF > managed-cert.yaml
    apiVersion: networking.gke.io/v1beta2
    kind: ManagedCertificate
    metadata:
      name: gke-ingress-cert
      namespace: istio-system
    spec:
      domains:
        - "frontend.endpoints.${PROJECT}.cloud.goog"
    EOF
    

    This YAML file specifies that the DNS name created through Endpoints is used to provision a public certificate. Because Google fully manages the lifecycle of these public certificates, they are automatically generated and rotated on a regular basis without direct user intervention.

  2. Deploy the managed-cert.yaml file in your GKE cluster:

    kubectl apply -f managed-cert.yaml
    

    The output is similar to the following:

    managedcertificate.networking.gke.io/gke-ingress-cert created
    
  3. Inspect the ManagedCertificate resource to check the progress of certificate generation:

    kubectl describe managedcertificate gke-ingress-cert -n istio-system
    

    The output is similar to the following:

    Name:         gke-ingress-cert
    Namespace:    istio-system
    Labels:       <none>
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"networking.gke.io/v1beta2","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"gke-ingress-cert","namespace":"...
    API Version:  networking.gke.io/v1beta2
    Kind:         ManagedCertificate
    Metadata:
      Creation Timestamp:  2020-08-05T20:44:49Z
      Generation:          2
      Resource Version:    1389781
      Self Link:           /apis/networking.gke.io/v1beta2/namespaces/istio-system/managedcertificates/gke-ingress-cert
      UID:                 d74ec346-ced9-47a8-988a-6e6e9ddc4019
    Spec:
      Domains:
        frontend.endpoints.edge2mesh.cloud.goog
    Status:
      Certificate Name:    mcrt-306c779e-8439-408a-9634-163664ca6ced
      Certificate Status:  Provisioning
      Domain Status:
        Domain:  frontend.endpoints.edge2mesh.cloud.goog
        Status:  Provisioning
    Events:
      Type    Reason  Age   From                            Message
      ----    ------  ----  ----                            -------
      Normal  Create  44s   managed-certificate-controller  Create SslCertificate mcrt-306c779e-8439-408a-9634-163664ca6ced
    

    When the certificate is ready, the Certificate Status is Active.
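
    Provisioning can take a while. To poll just the status field instead of reading the full describe output, you can use this optional convenience command (the field name follows the v1beta2 API used earlier):

    kubectl get managedcertificate gke-ingress-cert -n istio-system \
        -o jsonpath='{.status.certificateStatus}{"\n"}'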

Deploy the Ingress resource

  1. In Cloud Shell, save the following Ingress manifest as ingress.yaml:

    cat <<EOF > ingress.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: gke-ingress
      namespace: istio-system
      annotations:
        kubernetes.io/ingress.allow-http: "false"
        kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
        networking.gke.io/managed-certificates: "gke-ingress-cert"
    spec:
      rules:
      - host: frontend.endpoints.${PROJECT}.cloud.goog
        http:
          paths:
          - backend:
              serviceName: istio-ingressgateway
              servicePort: 80
    EOF
    

    This manifest defines an Ingress resource that ties all of the previous resources together. The manifest specifies the following fields:

    • kubernetes.io/ingress.allow-http: "false" disables HTTP traffic on port 80 of the Google Cloud load balancer. This setting effectively prevents clients from connecting with unencrypted traffic because port 443 only listens for HTTPS and port 80 is disabled.
    • kubernetes.io/ingress.global-static-ip-name: "ingress-ip" links the previously created IP address with the load balancer. This link allows the IP address to be created separately from the load balancer so that the address can be reused independently of the load balancer lifecycle.
    • networking.gke.io/managed-certificates: "gke-ingress-cert" links this load balancer with the previously created Google-managed SSL certificate resource.
    • host: frontend.endpoints.${PROJECT}.cloud.goog specifies the HTTP Host header that this load balancer matches. It uses the DNS name that you use to advertise your application.
  2. Deploy ingress.yaml in your cluster:

    kubectl apply -f ingress.yaml
    
  3. Inspect the Ingress resource to check the progress of the load balancer deployment:

    kubectl describe ingress gke-ingress -n istio-system
    

    The output is similar to the following:

    ...
    Annotations:
      ingress.kubernetes.io/https-forwarding-rule:       k8s2-fs-fq3ng2uk-istio-system-gke-ingress-qm3qqdor
      ingress.kubernetes.io/ssl-cert:                    mcrt-306c779e-8439-408a-9634-163664ca6ced
      networking.gke.io/managed-certificates:            gke-ingress-cert
      kubernetes.io/ingress.global-static-ip-name:  ingress-ip
      ingress.gcp.kubernetes.io/pre-shared-cert:    mcrt-306c779e-8439-408a-9634-163664ca6ced
      ingress.kubernetes.io/backends:               {"k8s-be-31610--07bdde06b914144a":"HEALTHY","k8s1-07bdde06-istio-system-istio-ingressgateway-443-228c1881":"HEALTHY"}
      ingress.kubernetes.io/forwarding-rule:        k8s2-fr-fq3ng2uk-istio-system-gke-ingress-qm3qqdor
      ingress.kubernetes.io/https-target-proxy:     k8s2-ts-fq3ng2uk-istio-system-gke-ingress-qm3qqdor
      ingress.kubernetes.io/target-proxy:           k8s2-tp-fq3ng2uk-istio-system-gke-ingress-qm3qqdor
      ingress.kubernetes.io/url-map:                k8s2-um-fq3ng2uk-istio-system-gke-ingress-qm3qqdor
    ...
    

    The Ingress resource is ready when the ingress.kubernetes.io/backends annotations indicate that the backends are HEALTHY. The annotations also show the names of different Google Cloud resources that are provisioned, including backend services, SSL certificates, and HTTPS target proxies.

    After your certificate is provisioned and the Ingress is ready, your application is accessible.

  4. Get the URL of your application, and then go to it in your browser:

    echo "https://frontend.endpoints.${PROJECT}.cloud.goog"
    

    Your Online Boutique frontend is displayed.

    Products shown on Online Boutique home page.

  5. To display the details of your certificate, click View site information in your browser's address bar, and then click Certificate (Valid).

    The certificate viewer displays details for the managed certificate, including the expiration date and who issued the certificate.

You now have a global HTTPS load balancer serving as a frontend to your service mesh-hosted application.
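
As an optional command-line check, a sketch rather than one of the tutorial's steps, you can confirm that the full path through the cloud ingress and mesh ingress layers returns HTTP 200:

    curl -s -o /dev/null -w "%{http_code}\n" \
        "https://frontend.endpoints.${PROJECT}.cloud.goog"

A 200 response confirms that the load balancer, the managed certificate, the mesh ingress proxies, and the application backends are all serving.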

Cleaning up

After you've finished the tutorial, you can clean up the resources you created on Google Cloud so you won't be billed for them in the future. You can either delete the project entirely or delete cluster resources and then delete the cluster.

Delete the project

  1. In the Cloud Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project that you want to delete and then click Delete.
  3. In the dialog, type the project ID and then click Shut down to delete the project.

Delete the individual resources

If you want to keep the Cloud project you used in this tutorial, delete the individual resources:

  1. Delete the Ingress resource:

    kubectl delete -f ingress.yaml
    
  2. Delete the managed certificate:

    kubectl delete -f managed-cert.yaml
    
  3. Delete the Endpoints DNS entry:

    gcloud endpoints services delete "frontend.endpoints.${PROJECT}.cloud.goog"
    

    The output is similar to the following:

    Are you sure? This will set the service configuration to be deleted, along
    with all of the associated consumer information. Note: This does not
    immediately delete the service configuration or data and can be undone using
    the undelete command for 30 days. Only after 30 days will the service be
    purged from the system.
    
  4. When you are prompted to continue, enter Y.

    The output is similar to the following:

    Waiting for async operation operations/services.frontend.endpoints.edge2mesh.cloud.goog-5 to complete...
    Operation finished successfully. The following command can describe the Operation details:
     gcloud endpoints operations describe operations/services.frontend.endpoints.edge2mesh.cloud.goog-5
    
  5. Delete the static IP address:

    gcloud compute addresses delete ingress-ip --global
    

    The output is similar to the following:

    The following global addresses will be deleted:
    
     - [ingress-ip]
    
  6. When you are prompted to continue, enter Y.

    The output is similar to the following:

    Deleted
    [https://www.googleapis.com/compute/v1/projects/edge2mesh/global/addresses/ingress-ip].
    
  7. Delete the GKE cluster:

    gcloud container clusters delete $CLUSTER_NAME --zone $CLUSTER_LOCATION
    

What's next