Integrating IAP with Anthos Service Mesh

This tutorial describes how to integrate Identity-Aware Proxy (IAP) with Anthos Service Mesh. The IAP integration with Anthos Service Mesh enables you to safely access services based on Google's BeyondCorp principles. IAP verifies user identity and context of the request to determine if a user should be allowed to access an application or resource. The IAP integration with Anthos Service Mesh provides you with the following benefits:

  • Complete context-aware access control to the workloads running on Anthos Service Mesh. You can set fine-grained access policies based on attributes of the originating request, such as user identity, IP address, and device type. You can combine your access policies with restrictions based on the hostname and path of a request URL.

  • Support for context-aware claims in Anthos Service Mesh authorization.

  • Scalable, secure, and highly available access to your application through a Google Cloud load balancer. High-performance load balancing provides built-in protection against distributed denial-of-service (DDoS) attacks and support for global anycast IP addressing.

Objectives

  • Get set up:
    1. Set up your Cloud project to grant the permissions and enable the Google APIs required by IAP.
    2. Reserve an external static IP address and configure a domain name to use the IP address, which the load balancer needs.
    3. Set up a new Google Kubernetes Engine (GKE) cluster with the options required to integrate IAP with Anthos Service Mesh.
    4. Install Anthos Service Mesh with the options required for the integration.
    5. Deploy a sample application.
    6. Deploy the load balancer.
  • Enable IAP.

  • Enable RCToken support on the service mesh.

Costs

This tutorial uses the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

Requirements

  • You must have an Anthos trial license and subscription. See the Anthos Pricing guide for details.

  • Your GKE cluster must meet the following requirements:

  • Review Requirements for Pods and Services before you deploy workloads.

  • If you are installing Anthos Service Mesh on a private cluster, you must add a firewall rule to open port 9443 if you want to use automatic sidecar injection. If you don't add the firewall rule and automatic sidecar injection is enabled, you get an error when you deploy workloads. For details on adding a firewall rule, see Adding firewall rules for specific use cases.

  • Your cluster must be registered to an Anthos Environ using Connect for Anthos. Your project's Environ provides a unified way to view and manage your clusters and their workloads as part of Anthos, including clusters outside Google Cloud. Anthos charges apply only to your registered clusters. You can find out how to register a cluster in Registering a cluster.
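For the private-cluster firewall rule mentioned above, the command might look like the following sketch. NETWORK_NAME, MASTER_CIDR, and NODE_TAG are placeholders for your cluster's network, master IP range, and node target tags:

```shell
# Hypothetical example: allow the cluster master to reach the sidecar
# injection webhook on port 9443. Replace the placeholder values with
# your cluster's network, master CIDR block, and node tags.
gcloud compute firewall-rules create allow-sidecar-injection \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:9443 \
    --source-ranges=MASTER_CIDR \
    --target-tags=NODE_TAG
```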

Setting up your environment

You can follow the installation guides using Cloud Shell, an in-browser command line interface to your Google Cloud resources, or your own computer running Linux or macOS.

Option A: Use Cloud Shell

Cloud Shell provisions a g1-small Compute Engine virtual machine (VM) running a Debian-based Linux operating system. The advantages of using Cloud Shell are:

  • Cloud Shell includes the gcloud, kubectl, and helm command-line tools that you need.

  • Your Cloud Shell $HOME directory has 5 GB of persistent storage.

  • You have your choice of text editors:

    • Code editor, which you access by clicking edit at the top of the Cloud Shell window.

    • Emacs, Vim, or Nano, which you access from the command line in Cloud Shell.

To use Cloud Shell:

  1. Go to the Cloud Console.
  2. Select your Cloud project.
  3. Click the Activate Cloud Shell button at the top of the Cloud Console window.

    Google Cloud Platform console

    A Cloud Shell session opens inside a new frame at the bottom of the Cloud Console and displays a command-line prompt.

    Cloud Shell session

Option B: Use command-line tools locally

On your local machine, install the following tools if you don't already have them:

  1. Install and initialize the Cloud SDK (the gcloud command-line tool).

    If you already have the Cloud SDK installed, make sure to update the components:

    gcloud components update
    
  2. Install kubectl:

    gcloud components install kubectl
    

Setting up your project

  1. Authenticate with the Cloud SDK:
    gcloud auth login
  2. Get your Cloud project ID and create an environment variable for it:
    export PROJECT_ID=YOUR_PROJECT_ID
  3. Set the default project ID for the gcloud command-line tool:
    gcloud config set project ${PROJECT_ID}
  4. Create an environment variable for the project number:
    export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} --format="value(projectNumber)")
  5. Set the required Cloud Identity and Access Management (Cloud IAM) roles. You need the following roles on the Cloud project:
    Role name                  Role ID                                  Description
    Project Editor             roles/editor                             Permissions for actions that modify state, such as changing existing resources.
    Kubernetes Engine Admin    roles/container.admin                    Provides access to full management of Container Clusters and their Kubernetes API objects.
    Project IAM Admin          roles/resourcemanager.projectIamAdmin    Provides permissions to administer Cloud IAM policies on projects.

    If you are a Project Owner, you don't need to add those roles because the Project Owner role has all the necessary permissions. For details on setting the roles, see Granting, changing, and revoking access to resources.

  6. Enable the following APIs:
    gcloud services enable \
        container.googleapis.com \
        compute.googleapis.com \
        stackdriver.googleapis.com \
        meshca.googleapis.com \
        meshtelemetry.googleapis.com \
        meshconfig.googleapis.com \
        iamcredentials.googleapis.com \
        anthos.googleapis.com \
        iap.googleapis.com
    

    Enabling the APIs can take a minute or more to complete.
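If you need to grant the Cloud IAM roles from step 5 with the gcloud command-line tool instead of the Cloud Console, one approach is the following sketch, where USER_EMAIL is a placeholder for the user's email address:

```shell
# Grant each required role to a user account; USER_EMAIL is a placeholder.
for role in roles/editor roles/container.admin roles/resourcemanager.projectIamAdmin; do
  gcloud projects add-iam-policy-binding ${PROJECT_ID} \
      --member=user:USER_EMAIL \
      --role=${role}
done
```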

Reserve a static IP address and configure DNS

To integrate Identity-Aware Proxy with Anthos Service Mesh, you have to set up a Google Cloud HTTP(S) load balancer, which requires a domain name that points to a static IP address. You can reserve a static external IP address, which assigns the address to your project indefinitely until you explicitly release it.

  1. Reserve a static external IP address:

    gcloud compute addresses create example-static-ip --global
    
  2. Get the static IP address:

    gcloud compute addresses describe example-static-ip --global
    
  3. In your domain name registrar, configure a fully qualified domain name (FQDN) with the static IP address. Typically, you add an A record to your DNS settings. The configuration steps and terminology for adding an A record for a FQDN vary depending on your domain name registrar.

  4. Set the domain name in an environment variable:

    export DOMAIN_NAME=YOUR_DOMAIN_NAME

    It can take 24 to 48 hours for the DNS setting to propagate. You can continue setting up everything in this tutorial, but you won't be able to test the setup until the DNS settings propagate.
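While you wait, you can check whether the A record has propagated by comparing the address that DNS returns with the static IP address you reserved, for example:

```shell
# The dig answer should match the reserved static IP once the A record
# has propagated.
export STATIC_IP=$(gcloud compute addresses describe example-static-ip \
    --global --format="value(address)")
dig +short ${DOMAIN_NAME}
echo "Expected: ${STATIC_IP}"
```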

Setting up a new GKE cluster

This section explains the basics of creating a GKE cluster with the options that are required for Anthos Service Mesh. For more information, see Creating a cluster.

To set up a new cluster:

  1. Select a zone, a machine type, and a GKE release channel for the new cluster. The minimum machine type required by Anthos Service Mesh is n1-standard-4. You can use any release channel option.

    1. To get a list of the available GCP zones:

      gcloud compute zones list
      
    2. To get a list of machine types:

      gcloud compute machine-types list | more
      
  2. Create the following environment variables:

    export CLUSTER_NAME=YOUR_CLUSTER_NAME

    The cluster name must contain only lowercase alphanumerics and '-', must start with a letter and end with an alphanumeric, and must be no longer than 40 characters.

    export CLUSTER_ZONE=YOUR_CLUSTER_ZONE
    export IDNS=${PROJECT_ID}.svc.id.goog
    export MESH_ID="proj-${PROJECT_NUMBER}"
  3. Set the default zone for the gcloud command-line tool:

    gcloud config set compute/zone ${CLUSTER_ZONE}
    

    Tip: To make setting up your shell environment easier in the future, you can copy and paste the export statements for each environment variable to a simple shell script that you source when you start a new shell. You can also add the gcloud commands that set the default zone and project to the script. Or you can use gcloud config configurations to create and activate a named gcloud configuration.

  4. Create the cluster with the options required by Anthos Service Mesh. The following command creates a cluster containing 4 nodes of machine type n1-standard-4, which has 4 vCPUs. This is the minimum machine type and number of nodes required for Anthos Service Mesh. You can specify another machine type as long as it has at least 4 vCPUs, and you can increase the number of nodes as needed for your system requirements.

    gcloud beta container clusters create ${CLUSTER_NAME} \
        --machine-type=n1-standard-4 \
        --num-nodes=4 \
        --identity-namespace=${IDNS} \
        --enable-stackdriver-kubernetes \
        --subnetwork=default \
        --labels mesh_id=${MESH_ID} \
        --addons=HttpLoadBalancing \
        --release-channel regular
    

    The clusters create command includes:

    • identity-namespace=${IDNS}: Enables Workload Identity, which is the recommended way to safely access Google Cloud services from GKE applications.

    • enable-stackdriver-kubernetes: Enables Cloud Monitoring and Cloud Logging on GKE.

    • subnetwork=default: Creates a default subnetwork.

    • labels mesh_id=${MESH_ID}: Sets the mesh_id label on the cluster, which is required for metrics to get displayed on the Anthos Service Mesh Dashboard in the Cloud Console.

    • addons=HttpLoadBalancing: Enables an HTTP (L7) load balancing controller for the cluster.

    • release-channel regular: Enrolls the cluster in the regular release channel, though you can choose stable if you require greater stability or rapid if you want to try out new (unsupported) GKE features.

  5. Register your cluster with your Anthos Environ following the instructions in Registering a cluster.
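Following the earlier tip about saving your shell environment, the export statements and gcloud defaults can be collected into a small script that you source in each new shell. This is a sketch; env.sh is a hypothetical filename and the YOUR_* values are placeholders:

```shell
#!/usr/bin/env bash
# env.sh (hypothetical name): run `source ./env.sh` in each new shell.
export PROJECT_ID=YOUR_PROJECT_ID
export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} \
    --format="value(projectNumber)")
export CLUSTER_NAME=YOUR_CLUSTER_NAME
export CLUSTER_ZONE=YOUR_CLUSTER_ZONE
export IDNS=${PROJECT_ID}.svc.id.goog
export MESH_ID="proj-${PROJECT_NUMBER}"
export DOMAIN_NAME=YOUR_DOMAIN_NAME

# Set the gcloud defaults used throughout this tutorial.
gcloud config set project ${PROJECT_ID}
gcloud config set compute/zone ${CLUSTER_ZONE}
```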

Preparing to install Anthos Service Mesh

Before installing Anthos Service Mesh, you need to:

  • Set required credentials and permissions.
  • Download and extract the Anthos Service Mesh installation file.

Set credentials and permissions

  1. Initialize your project to ready it for installation. Among other things, this command creates a service account to let Istio components, such as the sidecar proxy, securely access your project's data and resources:
    curl --request POST \
      --header "Authorization: Bearer $(gcloud auth print-access-token)" \
      --data '' \
      https://meshconfig.googleapis.com/v1alpha1/projects/${PROJECT_ID}:initialize

    The command responds with empty curly braces: {}

    If you install a new version of Anthos Service Mesh on this cluster in the future, you don't need to re-run the command, but running the command again doesn't affect your installation.

  2. Get authentication credentials to interact with the cluster:
    gcloud container clusters get-credentials ${CLUSTER_NAME}

    This command configures kubectl to use the specified cluster.

  3. Grant cluster admin permissions to the current user. You need these permissions to create the necessary role based access control (RBAC) rules for Anthos Service Mesh:
    kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user="$(gcloud config get-value core/account)"

    If you see the "cluster-admin-binding" already exists error, you can safely ignore it and continue with the existing cluster-admin-binding.

Download the installation file

Linux

  1. Download the Anthos Service Mesh installation file to your current working directory:
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-1.4.7-asm.0-linux.tar.gz
  2. Download the signature file and use openssl to verify the signature:
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-1.4.7-asm.0-linux.tar.gz.1.sig
    openssl dgst -verify /dev/stdin -signature istio-1.4.7-asm.0-linux.tar.gz.1.sig istio-1.4.7-asm.0-linux.tar.gz <<'EOF'
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEWZrGCUaJJr1H8a36sG4UUoXvlXvZ
    wQfk16sxprI2gOJ2vFFggdq3ixF2h4qNBt0kI7ciDhgpwS8t+/960IsIgw==
    -----END PUBLIC KEY-----
    EOF

    The expected output is: Verified OK

  3. Extract the contents of the file to any location on your file system. For example, to extract the contents to the current working directory:
    tar xzf istio-1.4.7-asm.0-linux.tar.gz

    The command creates an installation directory in your current working directory named istio-1.4.7-asm.0 that contains:

    • Sample applications in samples
    • The following tools in the bin directory:
      • istioctl: You use istioctl to install Anthos Service Mesh.
      • asmctl: You use asmctl to help validate your security configuration after installing Anthos Service Mesh.

  4. Ensure that you're in the Anthos Service Mesh installation's root directory.
    cd istio-1.4.7-asm.0
  5. For convenience, add the tools in the /bin directory to your PATH:
    export PATH=$PWD/bin:$PATH
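To confirm that the istioctl on your PATH is the one you just downloaded, you can print the local client version (the --remote=false flag skips querying the cluster):

```shell
# Print only the local client version; the output should correspond to
# the release you downloaded (1.4.7-asm.0).
istioctl version --remote=false
```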

macOS

  1. Download the Anthos Service Mesh installation file to your current working directory:
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-1.4.7-asm.0-osx.tar.gz
  2. Download the signature file and use openssl to verify the signature:
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-1.4.7-asm.0-osx.tar.gz.1.sig
    openssl dgst -verify /dev/stdin -signature istio-1.4.7-asm.0-osx.tar.gz.1.sig istio-1.4.7-asm.0-osx.tar.gz <<'EOF'
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEWZrGCUaJJr1H8a36sG4UUoXvlXvZ
    wQfk16sxprI2gOJ2vFFggdq3ixF2h4qNBt0kI7ciDhgpwS8t+/960IsIgw==
    -----END PUBLIC KEY-----
    EOF

    The expected output is: Verified OK

  3. Extract the contents of the file to any location on your file system. For example, to extract the contents to the current working directory:
    tar xzf istio-1.4.7-asm.0-osx.tar.gz

    The command creates an installation directory in your current working directory named istio-1.4.7-asm.0 that contains:

    • Sample applications in samples
    • The istioctl client binary in the bin directory. You can use istioctl to manually inject Envoy as a sidecar proxy and to create routing rules and policies.

  4. Ensure that you're in the Anthos Service Mesh installation's root directory.
    cd istio-1.4.7-asm.0
  5. For convenience, add the istioctl client to your PATH:
    export PATH=$PWD/bin:$PATH

Windows

  1. Download the Anthos Service Mesh installation file to your current working directory:
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-1.4.7-asm.0-win.zip
  2. Download the signature file and use openssl to verify the signature:
    curl -LO https://storage.googleapis.com/gke-release/asm/istio-1.4.7-asm.0-win.zip.1.sig
    openssl dgst -verify /dev/stdin -signature istio-1.4.7-asm.0-win.zip.1.sig istio-1.4.7-asm.0-win.zip <<'EOF'
    -----BEGIN PUBLIC KEY-----
    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEWZrGCUaJJr1H8a36sG4UUoXvlXvZ
    wQfk16sxprI2gOJ2vFFggdq3ixF2h4qNBt0kI7ciDhgpwS8t+/960IsIgw==
    -----END PUBLIC KEY-----
    EOF

    The expected output is: Verified OK

  3. Extract the contents of the file to any location on your file system. Extracting creates a directory named istio-1.4.7-asm.0 that contains:
    • Sample applications in samples
    • The istioctl client binary in the bin directory. You can use istioctl to manually inject Envoy as a sidecar proxy and to create routing rules and policies.

  4. Ensure that you're in the Anthos Service Mesh installation's root directory.
    cd istio-1.4.7-asm.0
  5. For convenience, add the istioctl client to your PATH.

Installing Anthos Service Mesh

Install Anthos Service Mesh and set the options needed to integrate Anthos Service Mesh with IAP.

PERMISSIVE mTLS

istioctl manifest apply --set profile=asm \
  --set values.gateways.istio-ingressgateway.type=NodePort \
  --set values.global.trustDomain=${IDNS} \
  --set values.global.sds.token.aud=${IDNS} \
  --set values.nodeagent.env.GKE_CLUSTER_URL=https://container.googleapis.com/v1/projects/${PROJECT_ID}/locations/${CLUSTER_ZONE}/clusters/${CLUSTER_NAME} \
  --set values.global.meshID=${MESH_ID} \
  --set values.global.proxy.env.GCP_METADATA="${PROJECT_ID}|${PROJECT_NUMBER}|${CLUSTER_NAME}|${CLUSTER_ZONE}"

STRICT mTLS

istioctl manifest apply --set profile=asm \
  --set values.gateways.istio-ingressgateway.type=NodePort \
  --set values.global.trustDomain=${IDNS} \
  --set values.global.sds.token.aud=${IDNS} \
  --set values.nodeagent.env.GKE_CLUSTER_URL=https://container.googleapis.com/v1/projects/${PROJECT_ID}/locations/${CLUSTER_ZONE}/clusters/${CLUSTER_NAME} \
  --set values.global.meshID=${MESH_ID} \
  --set values.global.proxy.env.GCP_METADATA="${PROJECT_ID}|${PROJECT_NUMBER}|${CLUSTER_NAME}|${CLUSTER_ZONE}" \
  --set values.global.mtls.enabled=true

You specify NodePort for the istio-ingressgateway, which configures Anthos Service Mesh to open a specific port on the service mesh. This lets you set up a load balancer that routes traffic sent to your domain name to this port. The other options enable Anthos Service Mesh certificate authority (Mesh CA).

Wait for the deployment to finish:

kubectl wait --for=condition=available --timeout=600s deployment --all -n istio-system

Output:

deployment.extensions/istio-galley condition met
deployment.extensions/istio-ingressgateway condition met
deployment.extensions/istio-pilot condition met
deployment.extensions/istio-sidecar-injector condition met
deployment.extensions/promsd condition met

Verifying the installation

Check that the control plane Pods in istio-system are up:

kubectl get pod -n istio-system

Expect to see output similar to the following:

NAME                                      READY   STATUS      RESTARTS   AGE
istio-galley-5c65896ff7-m2pls             2/2     Running     0          18m
istio-ingressgateway-587cd459f-q6hqt      2/2     Running     0          18m
istio-nodeagent-74w69                     1/1     Running     0          18m
istio-nodeagent-7524w                     1/1     Running     0          18m
istio-nodeagent-7652w                     1/1     Running     0          18m
istio-nodeagent-7948w                     1/1     Running     0          18m
istio-pilot-9db77b99f-7wfb6               2/2     Running     0          18m
istio-sidecar-injector-69c4d9f875-dt8rn   1/1     Running     0          18m
promsd-55f464d964-lqs7w                   2/2     Running     0          18m

You should see an instance of the istio-nodeagent for each node in your cluster. Mesh CA, which replaces the Citadel OSS Istio component, creates the node agents to issue mTLS certificates for the workloads running in your service mesh.
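One way to check that the node-agent count matches your node count, assuming the node agents are managed by a DaemonSet named istio-nodeagent, is:

```shell
# The DaemonSet's DESIRED count should equal the number of cluster nodes.
kubectl -n istio-system get daemonset istio-nodeagent
kubectl get nodes --no-headers | wc -l
```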

Deploying a sample application

Before you enable IAP, you need an application running on your GKE cluster so you can verify that all requests have an identity. This guide uses the Bookinfo sample to demonstrate how to set up the HTTP(S) load balancer and enable IAP.

Start the application services

  1. Change directory to the root of the Anthos Service Mesh installation.

  2. Label the default namespace to use automatic sidecar injection:

    kubectl label namespace default istio-injection=enabled
    
  3. Deploy the application:

    kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    
  4. Confirm all bookinfo services are running:

    kubectl get services
    

    The expected output is similar to:

    NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
    details                    10.0.0.31    <none>        9080/TCP             6m
    kubernetes                 10.0.0.1     <none>        443/TCP              7d
    productpage                10.0.0.120   <none>        9080/TCP             6m
    ratings                    10.0.0.15    <none>        9080/TCP             6m
    reviews                    10.0.0.170   <none>        9080/TCP             6m
  5. Confirm all Pods are running:

    kubectl get pods
    

    The expected output is similar to:

    NAME                                        READY     STATUS    RESTARTS   AGE
    details-v1-1520924117-48z17                 2/2       Running   0          6m
    productpage-v1-560495357-jk1lz              2/2       Running   0          6m
    ratings-v1-734492171-rnr5l                  2/2       Running   0          6m
    reviews-v1-874083890-f0qf0                  2/2       Running   0          6m
    reviews-v2-1343845940-b34q5                 2/2       Running   0          6m
    reviews-v3-1813607990-8ch52                 2/2       Running   0          6m
  6. Confirm that the Bookinfo application is running:

    kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
    

    Expected output:

    <title>Simple Bookstore App</title>
  7. Define the ingress gateway and virtual service for the application:

    kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    
  8. Confirm that the gateway was created:

    kubectl get gateway
    

    The expected output is similar to:

    NAME                AGE
    bookinfo-gateway    32s

External requests

Bookinfo's Gateway resource (defined in samples/bookinfo/networking/bookinfo-gateway.yaml) uses the preconfigured istio-ingressgateway. Recall that when you deployed Anthos Service Mesh, you specified NodePort for the istio-ingressgateway, which opens a specific port on the service mesh. Until you set up the load balancer, the Bookinfo application isn't accessible outside of your GKE cluster (such as from a browser). Although the nodes in your cluster have external IP addresses, requests coming from outside your cluster are blocked by Google Cloud firewall rules. With IAP, the correct way to expose this application to the public internet is by using a load balancer. Don't expose the node addresses using firewall rules, which would bypass IAP.

To route requests to Bookinfo, you set up an HTTP(S) load balancer in your Cloud project. Because the load balancer is in your project, it is inside the firewall and can access the nodes in your cluster. After you configure the load balancer with the static IP address and your domain name, you can send requests to the domain name, and the load balancer forwards the requests to the nodes in the cluster.
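To see which port NodePort opened on the istio-ingressgateway (the port that the load balancer's backend will target), you can query the Service, for example:

```shell
# Print the node port that serves HTTP (port 80) traffic on the
# ingress gateway Service.
kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
```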

Deploying the load balancer

You can use an Ingress resource to create an HTTP(S) load balancer with automatically configured SSL certificates. Google-managed SSL certificates are provisioned, renewed, and managed for your domain.

  1. Create a ManagedCertificate resource. This resource specifies the domain for the SSL certificate. The spec.domains list must contain only one domain. Wildcard domains aren't supported.

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.gke.io/v1beta1
    kind: ManagedCertificate
    metadata:
      name: example-certificate
    spec:
      domains:
        - ${DOMAIN_NAME}
    EOF
  2. Create the load balancer by defining the Ingress resource.

    • Set the networking.gke.io/managed-certificates annotation to the name of the certificate you created in the previous step, example-certificate.

    • Set the kubernetes.io/ingress.global-static-ip-name annotation to the name of the static IP address you reserved, example-static-ip.

    • Set the serviceName to istio-ingressgateway, which is used in the Gateway resource for the Bookinfo sample.

    cat <<EOF | kubectl create -f -
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      namespace: istio-system
      annotations:
        kubernetes.io/ingress.global-static-ip-name: example-static-ip
        networking.gke.io/managed-certificates: example-certificate
    spec:
      backend:
        serviceName: istio-ingressgateway
        servicePort: 80
    EOF
  3. In the Cloud Console, go to the Kubernetes Engine > Services & Ingress page.

    You should see the "Creating ingress" message in the Status column. Wait for GKE to fully provision the Ingress before continuing. Refresh the page every few minutes to get the most up-to-date status on the Ingress. After the Ingress is provisioned, you might see the "Ok" status, or the error "All backend services are in UNHEALTHY state." One of the resources that GKE provisions is a default health check. If you see the error message, that indicates that the Ingress is provisioned and that the default health check ran. When you see either the "Ok" status or the error, continue with the next section to configure the health checks for the load balancer.
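You can watch the same provisioning from the command line if you prefer; the Ingress's annotations and events show progress as GKE creates the load balancer resources:

```shell
# Inspect the Ingress; backend and certificate details appear in the
# annotations and events as provisioning proceeds.
kubectl -n istio-system get ingress example-ingress
kubectl -n istio-system describe ingress example-ingress
```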

Configure health checks for the load balancer

To configure the health checks, you need to obtain the ID of the default health check created by the Ingress and then update the health check to use istio-ingress's health check path and port.

  1. Get new user credentials to use for Application Default Credentials:

      gcloud auth application-default login

  2. Obtain the ID of the default health check created by the Ingress:

    1. Set the following environment variables:

      • Backend Service: Bridges various Instance Groups on a given Service NodePort.

        BACKEND_SERVICE=$(gcloud compute url-maps list | grep example-ingress | awk '{print $2}' | cut -d'/' -f 2)

      • Health check: This is the default health check that is created automatically when the Ingress is deployed.

        HC=$(gcloud compute backend-services describe ${BACKEND_SERVICE} --global | grep healthChecks | cut -d'/' -f 10 | tail -n 1)

      • Health check ingress port: This is the health check port of istio-ingress.

        export HC_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="status-port")].nodePort}')

      • Health check ingress path: This is the health check path of istio-ingress.

        export HC_INGRESS_PATH=$(kubectl -n istio-system get deployments istio-ingressgateway -o jsonpath='{.spec.template.spec.containers[?(@.name=="istio-proxy")].readinessProbe.httpGet.path}')

      • Health check API: This is the API that you call to configure the health check.
        export HC_API=https://compute.googleapis.com/compute/v1/projects/${PROJECT_ID}/global/healthChecks/${HC}

    2. Get the default health check into a JSON file by calling the healthChecks API:

      curl --request GET  --header "Authorization: Bearer $(gcloud auth application-default print-access-token)" ${HC_API} > health_check.json
  3. Update the health check to use istio-ingress's health check path and port:

    1. Update the health_check.json file as follows:

      • Set httpHealthCheck.port to the value of ${HC_INGRESS_PORT}.
      • Set httpHealthCheck.requestPath to the value of ${HC_INGRESS_PATH}.
      • Add the following attribute and set it to an empty string: httpHealthCheck.portSpecification=""

      The easiest way to do this is to use jq, which comes preinstalled on Cloud Shell:

      jq ".httpHealthCheck.port=${HC_INGRESS_PORT} | .httpHealthCheck.requestPath=\"${HC_INGRESS_PATH}\" | .httpHealthCheck.portSpecification=\"\"" health_check.json > updated_health_check.json

      If you run cat on the resulting updated_health_check.json file, it looks similar to the following:

      {
      "id": "5062913090021441698",
      "creationTimestamp": "2019-11-12T10:47:41.934-08:00",
      "name": "${HC}",
      "description": "Default kubernetes L7 Loadbalancing health check.",
      "checkIntervalSec": 60,
      "timeoutSec": 60,
      "unhealthyThreshold": 10,
      "healthyThreshold": 1,
      "type": "HTTP",
      "httpHealthCheck": {
        "port": 32394,
        "requestPath": "/healthz/ready",
        "proxyHeader": "NONE",
        "portSpecification": ""
      },
      "selfLink": "https://www.googleapis.com/compute/v1/projects/${PROJECT_ID}/global/healthChecks/${HC}",
      "kind": "compute#healthCheck"
      }
      

      If you edited the JSON file manually instead of using the jq command, save the file as updated_health_check.json so that it matches the filename in the next command.

    2. Update the health check:

      curl --request PATCH --header "Authorization: Bearer $(gcloud auth application-default print-access-token)" --header "Content-Type: application/json" --data @updated_health_check.json ${HC_API}

    It takes several minutes for GKE to update the health check. In the Cloud Console, refresh the Kubernetes Engine > Services & Ingress page every minute or so until the status for the Ingress changes to "Ok."

  4. Test the load balancer. Point your browser to:

    http://YOUR_DOMAIN_NAME/productpage

    where YOUR_DOMAIN_NAME is the domain name that you configured with the external static IP address.

    You should see the Bookinfo application's productpage. If you refresh the page several times, you should see different versions of reviews, presented in a round-robin style: red stars, black stars, no stars.

    You should also test HTTPS access to Bookinfo.
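The browser test can also be run from the command line; this sketch fetches the page over HTTPS and extracts the title:

```shell
# Requires DNS to have propagated and the managed certificate to be active.
curl -s https://${DOMAIN_NAME}/productpage | grep -o "<title>.*</title>"
```

The expected title is the same as in the earlier in-cluster check: <title>Simple Bookstore App</title>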

Enabling IAP

The following steps describe how to enable IAP.

  1. Check if you already have an existing brand by using the list command. Only one brand exists per project.

    gcloud alpha iap oauth-brands list
    

    The following is an example gcloud response, if the brand exists:

    name: projects/[PROJECT_NUMBER]/brands/[BRAND_ID]
    applicationTitle: [APPLICATION_TITLE]
    supportEmail: [SUPPORT_EMAIL]
    orgInternalOnly: true
    
  2. If no brand exists, use the create command:

    gcloud alpha iap oauth-brands create --application_title=APPLICATION_TITLE --support_email=SUPPORT_EMAIL
    

    The above fields are required when calling this API:

    • supportEmail: The support email displayed on the OAuth consent screen. This email address can either be a user's address or a Google Groups alias. While service accounts also have an email address, they are not actual valid email addresses, and cannot be used when creating a brand. However, a service account can be the owner of a Google Group. Either create a new Google Group or configure an existing group and set the desired service account as an owner of the group.

    • applicationTitle: The application name displayed on OAuth consent screen.

    The response contains the following fields:

    name: projects/[PROJECT_NUMBER]/brands/[BRAND_NAME]
    applicationTitle: [APPLICATION_TITLE]
    supportEmail: [SUPPORT_EMAIL]
    orgInternalOnly: true
    

Creating an IAP OAuth Client

  1. Use the create command to create a client. Use the brand name from the previous step.

    gcloud alpha iap oauth-clients create projects/PROJECT-ID/brands/BRAND-ID --display_name=NAME
    

    The response contains the following fields:

    name: projects/[PROJECT_NUMBER]/brands/[BRAND_NAME]/identityAwareProxyClients/[CLIENT_ID]
    secret: [CLIENT_SECRET]
    displayName: [NAME]
    

Turning on IAP for your service

Use the following command to turn on IAP for your service. Replace CLIENT_ID and CLIENT_SECRET with your OAuth client ID and client secret from the client you created previously.

gcloud beta iap web enable \
    --oauth2-client-id=CLIENT_ID \
    --oauth2-client-secret=CLIENT_SECRET \
    --resource-type=backend-services \
    --service=${BACKEND_SERVICE}

Configure the IAP access list

Add a user to the access policy for IAP:

gcloud beta iap web add-iam-policy-binding \
    --member=user:EMAIL_ADDRESS \
    --role=roles/iap.httpsResourceAccessor \
    --resource-type=backend-services \
    --service=$BACKEND_SERVICE

where EMAIL_ADDRESS is the user's full email address, such as alice@example.com.

Enable RCToken support on the service mesh

By default, IAP generates a JSON Web Token (JWT) that is scoped to the OAuth client. For Anthos Service Mesh, you can configure IAP to generate a RequestContextToken (RCToken), which is a JWT with a configurable audience. Because you can set the RCToken audience to an arbitrary string, you can reference it in Anthos Service Mesh policies for fine-grained authorization.
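Because an RCToken is a JWT, you can inspect its claims (including the audience) by base64url-decoding the payload segment. The following debugging sketch builds and decodes a fake, unsigned stand-in token; a real RCToken is signed by IAP and must never be hand-crafted this way:

```shell
# Debugging sketch: decode the payload of a JWT to inspect its "aud" claim.
# The token built here is a fake, unsigned stand-in, not a real IAP credential.
PAYLOAD=$(printf '{"aud":"your-rctoken-aud","iss":"https://cloud.google.com/iap"}' \
  | base64 | tr -d '=\n' | tr '/+' '_-')
RCTOKEN="eyJhbGciOiJSUzI1NiJ9.${PAYLOAD}.fake-signature"

# Take the second dot-separated segment and undo the base64url encoding.
SEG=$(printf '%s' "$RCTOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
printf '%s\n' "$SEG" | base64 -d
```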

To configure the RCToken:

  1. Create an environment variable for your project number. This is the number that was automatically generated and assigned to your project when you created it. (This isn't the same as the project ID.)

    export PROJECT_NUMBER=YOUR_PROJECT_NUMBER
  2. Create an environment variable for the RCToken audience. This can be any string that you want.

    export RCTOKEN_AUD="your-rctoken-aud"
    
  3. Fetch the existing IAP settings:

    gcloud beta iap settings get --format json \
    --project=${PROJECT_NUMBER} --resource-type=compute \
    --service=${BACKEND_SERVICE} > iapSettings.json
    
  4. Update IapSettings with the RCToken audience.

    cat iapSettings.json | jq --arg RCTOKEN_AUD_STR "$RCTOKEN_AUD" \
    '. + {applicationSettings: {csmSettings: {rctokenAud: $RCTOKEN_AUD_STR}}}' \
    > updatedIapSettings.json
    
    gcloud beta iap settings set updatedIapSettings.json --format json \
    --project=${PROJECT_NUMBER} --resource-type=compute --service=${BACKEND_SERVICE}
    
  5. Enable RCToken authentication on the Istio ingress gateway.

    cat <<EOF | kubectl apply -f -
    apiVersion: "authentication.istio.io/v1alpha1"
    kind: "Policy"
    metadata:
      name: "ingressgateway"
      namespace: istio-system
    spec:
      targets:
      - name: "istio-ingressgateway"
      origins:
      - jwt:
          issuer: "https://cloud.google.com/iap"
          jwksUri: "https://www.gstatic.com/iap/verify/public_key-jwk"
          audiences:
          - "$RCTOKEN_AUD"
          jwt_headers:
          - "ingress-authorization"
          trigger_rules:
          - excluded_paths:
            - exact: /healthz/ready
      principalBinding: USE_ORIGIN
    EOF
  6. Make sure requests to the Bookinfo productpage are still successful:

    http://DOMAIN_NAME/productpage
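The jq merge in step 4 can be sanity-checked locally before you apply it to your real settings. This sketch assumes jq is installed and uses a minimal, made-up stand-in for iapSettings.json:

```shell
# Sketch: run the same jq merge against a minimal stand-in settings file
# and confirm that rctokenAud lands where expected. All values are made up.
RCTOKEN_AUD="your-rctoken-aud"
echo '{"name": "projects/123/iap_web/compute/services/example-backend"}' > iapSettings.json

jq --arg RCTOKEN_AUD_STR "$RCTOKEN_AUD" \
  '. + {applicationSettings: {csmSettings: {rctokenAud: $RCTOKEN_AUD_STR}}}' \
  iapSettings.json > updatedIapSettings.json

jq -r '.applicationSettings.csmSettings.rctokenAud' updatedIapSettings.json   # prints your-rctoken-aud
```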

To test the policy:

  1. Create an IapSettings request object, but set the rctokenAud to a different string:

    cat <<EOF > request.txt
    {
       "name": "projects/${PROJECT_NUMBER}/iap_web/compute/services/${BACKEND_SERVICE}",
       "applicationSettings": {
         "csmSettings": {
           "rctokenAud": "some-other-arbitrary-string"
         }
       }
    }
    EOF
  2. Call the IAP settings API to set the RCToken audience, sending the request body you created in the previous step:

    curl --request PATCH \
        --header "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
        --header "Content-Type: application/json" \
        --data-binary @request.txt \
        "${IAP_SETTINGS_API}"
  3. Make a request to the Bookinfo productpage; because the audience no longer matches, the request should fail:

    http://DOMAIN_NAME/productpage

Enabling Pod Security Policies

Enabling pod security policies helps ensure that a compromised namespace (other than istio-system) can't affect the security of other namespaces that share the same nodes. Sample PodSecurityPolicy resource files that work with Mesh CA are provided with Anthos Service Mesh, and you can modify them as needed. In the following steps, you first apply the pod security policies, and then enable pod security policy enforcement on the GKE cluster.

  1. Apply the default Pod Security Policy for all the service accounts in the cluster:

    kubectl apply -f "samples/security/psp/all-pods-psp.yaml"
    
  2. Apply the pod security policy to secure the Secret Discovery Service (SDS):

    kubectl apply -f "samples/security/psp/citadel-agent-psp.yaml"
    

    This gives the Citadel agent (also referred to as the Node Agent) the privilege to create the Unix domain socket (UDS) path /var/run/sds on the host VM.

  3. Run the following command to enable the pod security policy:

    gcloud beta container clusters update ${CLUSTER_NAME} \
        --enable-pod-security-policy
    

    Enabling the pod security policies might take several minutes. During this process, existing workloads won't be able to connect to the Kubernetes master. Wait until the Kubernetes master is up again. You can check the cluster status in the Google Cloud Console on the Kubernetes clusters page.

    For more information, see Using pod security policies.
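The sample files shipped with Anthos Service Mesh are the ones to apply, but for orientation, a minimal PodSecurityPolicy has the following shape. This is an illustrative sketch, not the contents of all-pods-psp.yaml; all names and field values are examples:

```yaml
# Illustrative sketch of a restrictive PodSecurityPolicy; not the contents
# of the Anthos Service Mesh sample files.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted-psp
spec:
  privileged: false               # disallow privileged containers
  allowPrivilegeEscalation: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'secret'
  - 'projected'
  - 'downwardAPI'
  - 'persistentVolumeClaim'
```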

Cleaning up

After completing this tutorial, delete the following resources to prevent unwanted charges from accruing to your account:

  1. Delete the managed certificate:

    kubectl delete managedcertificates example-certificate
  2. Delete the Ingress, which deallocates the load balancing resources:

    kubectl -n istio-system delete ingress example-ingress

  3. Delete the static IP address:

    gcloud compute addresses delete example-static-ip --global

    After you delete the IP address, be sure to also remove the DNS records that point to it at your domain registrar.

  4. Delete the cluster, which deletes the resources that make up the cluster, such as the compute instances, disks, and network resources:

    gcloud container clusters delete ${CLUSTER_NAME}