You are viewing documentation for Anthos Service Mesh 1.4.

Integrating IAP with Anthos Service Mesh

This tutorial describes how to integrate Identity-Aware Proxy (IAP) with Anthos Service Mesh. The IAP integration with Anthos Service Mesh enables you to safely access services based on Google's BeyondCorp principles. IAP verifies user identity and context of the request to determine if a user should be allowed to access an application or resource. The IAP integration with Anthos Service Mesh provides you with the following benefits:

  • Complete context-aware access control to the workloads running on Anthos Service Mesh. You can set fine-grained access policies based on attributes of the originating request, such as user identity, the IP address, and device type. You can combine your access policies with restrictions based on the hostname and path of a request URL.

  • Support for context-aware claims in Anthos Service Mesh authorization.

  • Scalable, secure, and highly available access to your application through a Google Cloud load balancer. High-performance load balancing provides built-in protection against distributed denial-of-service (DDoS) attacks and support for global anycast IP addressing.


Objectives

  • Get set up:
    1. Set up your Cloud project to grant the permissions and enable the Google APIs required by IAP.
    2. Reserve an external static IP address and configure a domain name to use the IP address, which the load balancer needs.
    3. Set up a new Google Kubernetes Engine (GKE) cluster with the options required to integrate IAP with Anthos Service Mesh.
    4. Install Anthos Service Mesh with the options required for the integration.
    5. Deploy a sample application.
    6. Deploy the load balancer.
  • Enable IAP.

  • Enable RCToken support on the service mesh.


Costs

This tutorial uses the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin


  • You must have an Anthos trial license or subscription. See the Anthos Pricing guide for details.

  • Your GKE cluster must meet the following requirements:

    • At least four nodes.
    • The minimum machine type is e2-standard-4, which has four vCPUs.
    • Use a release channel rather than a static version of GKE.
  • To be included in the service mesh, service ports must be named, and the name must include the port's protocol in the following syntax: name: protocol[-suffix], where the square brackets indicate an optional suffix that must start with a dash. For more information, see Naming service ports.

  • If you are installing Anthos Service Mesh on a private cluster, you must add a firewall rule to open port 9443 if you want to use automatic sidecar injection. If you don't add the firewall rule and automatic sidecar injection is enabled, you get an error when you deploy workloads. For details on adding a firewall rule, see Adding firewall rules for specific use cases.

  • If you have created a service perimeter in your organization, you might need to add the Mesh CA service to the perimeter. See Adding Mesh CA to a service perimeter for more information.
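The port-naming requirement above can be sketched as a quick check. This is an illustrative helper only, not part of the tutorial, and the protocol list here is a representative subset rather than the complete set recognized by the mesh:

```shell
# Hypothetical check: does a service port name follow the
# protocol[-suffix] convention required for mesh inclusion?
valid_port_name() {
  case "$1" in
    http|http2|grpc|tcp|tls|mongo|redis) return 0 ;;
    http-*|http2-*|grpc-*|tcp-*|tls-*|mongo-*|redis-*) return 0 ;;
    *) return 1 ;;
  esac
}

valid_port_name "http-web" && echo "http-web: included in the mesh"
valid_port_name "web" || echo "web: not recognized, excluded from the mesh"
```

A port named `web` has no protocol prefix, so the sidecar treats its traffic as plain TCP instead of HTTP.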

Setting up your environment

For installations on Google Kubernetes Engine, you can follow the installation guides using Cloud Shell, an in-browser command line interface to your Google Cloud resources, or your own computer running Linux or macOS.

Option A: Use Cloud Shell

Cloud Shell provisions a g1-small Compute Engine virtual machine (VM) running a Debian-based Linux operating system. The advantages to using Cloud Shell are:

  • Cloud Shell includes the gcloud, kubectl and helm command-line tools that you need.

  • Your Cloud Shell $HOME directory has 5GB persistent storage space.

  • You have your choice of text editors:

    • Code editor, which you access by clicking the editor button at the top of the Cloud Shell window.

    • Emacs, Vim, or Nano, which you access from the command line in Cloud Shell.

To use Cloud Shell:

  1. Go to the Cloud Console.
  2. Select your Cloud project.
  3. Click the Activate Cloud Shell button at the top of the Cloud Console window.

    Google Cloud Platform console

    A Cloud Shell session opens inside a new frame at the bottom of the Cloud Console and displays a command-line prompt.

    Cloud Shell session

  4. Update the components:

    gcloud components update

    The command responds with output similar to the following:

    ERROR: (gcloud.components.update)
    You cannot perform this action because the Cloud SDK component manager
    is disabled for this installation. You can run the following command
    to achieve the same result for this installation:
    sudo apt-get update && sudo apt-get --only-upgrade install ...
  5. Copy the long command and paste it to update the components.

  6. Install kubectl:

    sudo apt-get install kubectl
  7. Install kpt:

    sudo apt-get install google-cloud-sdk-kpt

Option B: Use command-line tools locally

On your local machine, install and initialize the Cloud SDK (the gcloud command-line tool).

If you already have the Cloud SDK installed:

  1. Authenticate with the Cloud SDK:

    gcloud auth login
  2. Update the components:

    gcloud components update
  3. Install kubectl:

    gcloud components install kubectl
  4. Install kpt:

    gcloud components install kpt

Setting up your project

  1. Get the project ID of the project that the cluster will be created in:


    gcloud projects list


    1. In the Cloud Console, go to the Dashboard page:

      Go to the Dashboard page

    2. Click the Select from drop-down list at the top of the page. In the Select from window that appears, select your project.

      The project ID is displayed on the project Dashboard Project info card.

  2. Create an environment variable for the project ID, replacing YOUR_PROJECT_ID with the ID from the previous step:

    export PROJECT_ID=YOUR_PROJECT_ID
  3. Set the default project ID for the gcloud command-line tool:
    gcloud config set project ${PROJECT_ID}
  4. Create an environment variable for the project number:
    export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} --format="value(projectNumber)")

  5. Set the required Identity and Access Management (IAM) roles. If you are a Project Owner, you have all the necessary permissions to complete the installation and register your cluster with your environ. If you aren't a Project Owner, you need someone who is to grant you the following specific IAM roles. Because gcloud accepts a single --role per invocation, the roles are granted in a loop. In the following command, change GCP_EMAIL_ADDRESS to the account that you use to log in to Google Cloud.
     for role in roles/editor roles/compute.admin roles/container.admin \
         roles/resourcemanager.projectIamAdmin roles/iam.serviceAccountAdmin \
         roles/iam.serviceAccountKeyAdmin; do
       gcloud projects add-iam-policy-binding ${PROJECT_ID} \
           --member user:GCP_EMAIL_ADDRESS \
           --role=${role}
     done

    To learn more about how to grant IAM roles, refer to Granting, changing, and revoking access to resources. For a description of these roles, see Permissions required to install Anthos Service Mesh

  6. Enable the following APIs:
    gcloud services enable \
        container.googleapis.com \
        compute.googleapis.com \
        monitoring.googleapis.com \
        logging.googleapis.com \
        cloudtrace.googleapis.com \
        meshca.googleapis.com \
        meshtelemetry.googleapis.com \
        meshconfig.googleapis.com \
        iamcredentials.googleapis.com \
        anthos.googleapis.com \
        gkeconnect.googleapis.com \
        gkehub.googleapis.com \
        cloudresourcemanager.googleapis.com \
        iap.googleapis.com

    Enabling the APIs can take a minute or more to complete. When the APIs are enabled, you see output similar to the following:

    Operation "operations/acf.601db672-88e6-4f98-8ceb-aa3b5725533c" finished

Reserve a static IP address and configure DNS

To integrate Identity-Aware Proxy with Anthos Service Mesh, you have to set up a Google Cloud HTTP(S) load balancer, which requires a domain name that points to a static IP address. You can reserve a static external IP address, which assigns the address to your project indefinitely until you explicitly release it.

  1. Reserve a static external IP address:

    gcloud compute addresses create example-static-ip --global
  2. Get the static IP address:

    gcloud compute addresses describe example-static-ip --global
  3. In your domain name registrar, configure a fully qualified domain name (FQDN) with the static IP address. Typically, you add an A record to your DNS settings. The configuration steps and terminology for adding an A record for a FQDN vary depending on your domain name registrar.

  4. Set the domain name in an environment variable, replacing YOUR_DOMAIN_NAME with the FQDN that you configured:

    export DOMAIN_NAME=YOUR_DOMAIN_NAME

    It can take 24 to 48 hours for the DNS setting to propagate. You can continue setting up everything in this tutorial, but you won't be able to test the setup until the DNS settings propagate.
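For illustration only, an A record in a BIND-style zone file has the following shape. The name and address here are hypothetical (203.0.113.0/24 is a reserved documentation range); your registrar's interface may present the same fields differently:

```
bookinfo.example.com.    3600    IN    A    203.0.113.10
```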

Setting up a new GKE cluster

This section explains the basics of creating a GKE cluster with the options that are required for Anthos Service Mesh. For more information, see Creating a cluster.

To set up a new cluster:

  1. Select a zone or region, a machine type, and a GKE release channel for the new cluster. The minimum machine type required by Anthos Service Mesh is e2-standard-4. You can use any release channel option.

    • If you will be creating a single-zone cluster, run the following command to get a list of the available GCP zones:

      gcloud compute zones list
    • If you will be creating a regional cluster, run the following command to get a list of the available regions:

      gcloud compute regions list
    • To get a list of machine types:

      gcloud compute machine-types list | more
  2. Create the following environment variables:

    • Set the cluster name, replacing YOUR_CLUSTER_NAME with the name that you want to use:

      export CLUSTER_NAME=YOUR_CLUSTER_NAME

      The cluster name must contain only lowercase alphanumerics and '-', must start with a letter and end with an alphanumeric, and must be no longer than 40 characters.

    • Set the CLUSTER_LOCATION to either your cluster zone or cluster region:

      export CLUSTER_LOCATION=YOUR_ZONE_OR_REGION
    • Set the workload pool:

      export WORKLOAD_POOL=${PROJECT_ID}
    • Set the mesh ID:

      export MESH_ID="proj-${PROJECT_NUMBER}"
    • Set the release channel. Replace YOUR_CHANNEL with one of the following: regular, stable, or rapid.

      export CHANNEL=YOUR_CHANNEL

      For a description of each channel, see What channels are available.

  3. Set the default zone or region for the gcloud command-line tool.

    • For a single-zone cluster, set the default zone:

      gcloud config set compute/zone ${CLUSTER_LOCATION}
    • For a regional cluster, set the default region:

      gcloud config set compute/region ${CLUSTER_LOCATION}

    Tip: To make setting up your shell environment easier in the future, you can copy and paste the export statements for each environment variable to a simple shell script that you source when you start a new shell. You can also add the gcloud commands that set default values to the script. Or you can use gcloud init to create and activate a named gcloud configuration.

  4. Create the cluster with the options required by Anthos Service Mesh. The following command creates a cluster containing 4 nodes of machine type e2-standard-4, which has 4 vCPUs. This is the minimum machine type and number of nodes required for Anthos Service Mesh. You can specify another machine type as long as it has at least 4 vCPUs, and you can increase the number of nodes as needed for your system requirements.

    gcloud beta container clusters create ${CLUSTER_NAME} \
        --machine-type=e2-standard-4 \
        --num-nodes=4 \
        --workload-pool=${WORKLOAD_POOL} \
        --enable-stackdriver-kubernetes \
        --subnetwork=default \
        --labels=mesh_id=${MESH_ID} \
        --release-channel=${CHANNEL}

    The clusters create command includes:

    • workload-pool=${WORKLOAD_POOL}: Enables Workload Identity, which is the recommended way to safely access Google Cloud services from GKE applications.

    • enable-stackdriver-kubernetes: Enables Cloud Monitoring and Cloud Logging on GKE.

    • subnetwork=default: Uses the default subnetwork.

    • labels mesh_id=${MESH_ID}: Sets the mesh_id label on the cluster, which is required for metrics to be displayed on the Anthos Service Mesh pages in the Cloud Console.

    • release-channel ${CHANNEL}: Enrolls the cluster in the specified release channel.
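For convenience, the environment variables used in this section can be collected in a small script that you source when you start a new shell. This is a sketch with placeholder values, not values from your project; replace each one before using it:

```shell
# env.sh -- hypothetical helper collecting this tutorial's variables.
# All values below are placeholders.
export PROJECT_ID=my-project
export PROJECT_NUMBER=123456789012
export CLUSTER_NAME=asm-iap-demo
export CLUSTER_LOCATION=us-central1-b
export CHANNEL=regular

# Derived values, computed the same way the tutorial does.
export WORKLOAD_POOL=${PROJECT_ID}
export MESH_ID="proj-${PROJECT_NUMBER}"

echo "mesh id: ${MESH_ID}"
```

Sourcing the script (`. ./env.sh`) restores the whole environment in one step.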

Preparing to install Anthos Service Mesh

Before continuing, verify that the ASM Mesh Data Plane Service Account is a member of the project:

gcloud projects get-iam-policy ${PROJECT_ID} | grep -B 1 'roles/meshdataplane.serviceAgent'

If the previous command doesn't output anything, go back to the Set credentials and permissions section and run the curl command.


  Linux

  1. Download the Anthos Service Mesh installation file to your current working directory:
    curl -LO
  2. Download the signature file and use openssl to verify the signature:
    curl -LO
    openssl dgst -sha256 -verify /dev/stdin -signature -linux.tar.gz.1.sig -linux.tar.gz <<'EOF'
    -----BEGIN PUBLIC KEY-----
    -----END PUBLIC KEY-----
    EOF

    The expected output is: Verified OK

  Mac OS

  1. Download the Anthos Service Mesh installation file to your current working directory:
    curl -LO
  2. Download the signature file and use openssl to verify the signature:
    curl -LO
    openssl dgst -sha256 -verify /dev/stdin -signature -osx.tar.gz.1.sig -osx.tar.gz <<'EOF'
    -----BEGIN PUBLIC KEY-----
    -----END PUBLIC KEY-----
    EOF

    The expected output is: Verified OK

  Windows

  1. Download the Anthos Service Mesh installation file to your current working directory:
    curl -LO
  2. Download the signature file and use openssl to verify the signature:
    curl -LO
    openssl dgst -sha256 -verify /dev/stdin -signature <<'EOF'
    -----BEGIN PUBLIC KEY-----
    -----END PUBLIC KEY-----
    EOF

    The expected output is: Verified OK

  3. Extract the contents of the file to any location on your file system. For example, to extract the contents to the current working directory:
    tar xzf -linux.tar.gz

    The command creates an installation directory in your current working directory that contains:

    • Sample applications in samples
    • The following tools in the bin directory:
      • istioctl: You use istioctl to install Anthos Service Mesh.
      • asmctl: You use asmctl to help validate your security configuration after installing Anthos Service Mesh. (Currently, asmctl isn't supported on GKE on-prem.)

  4. Ensure that you're in the Anthos Service Mesh installation's root directory.
  5. For convenience, add the tools in the bin directory to your PATH:
    export PATH=$PWD/bin:$PATH
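As a generic illustration of why prepending a directory to PATH works (the directory and tool name below are made up for the demo; they are not part of the Anthos Service Mesh bundle):

```shell
# Create a throwaway directory containing a stand-in executable.
demo_dir=$(mktemp -d)
printf '#!/bin/sh\necho from-demo-dir\n' > "$demo_dir/mytool"
chmod +x "$demo_dir/mytool"

# Prepending the directory makes its tools win the PATH lookup,
# just as $PWD/bin makes the bundled istioctl the one that runs.
PATH="$demo_dir:$PATH"
mytool
```

Because PATH is searched left to right, the bundled istioctl shadows any istioctl installed elsewhere on the system for the rest of the shell session.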

Installing Anthos Service Mesh

Install Anthos Service Mesh and set the options needed to integrate Anthos Service Mesh with IAP.


istioctl manifest apply --set profile=asm \
  --set values.gateways.istio-ingressgateway.type=NodePort \
  --set values.global.trustDomain=${WORKLOAD_POOL} \
  --set values.global.sds.token.aud=${WORKLOAD_POOL} \
  --set values.nodeagent.env.GKE_CLUSTER_URL=${PROJECT_ID}/locations/${CLUSTER_LOCATION}/clusters/${CLUSTER_NAME} \
  --set values.global.meshID=${MESH_ID}

You specify NodePort for the istio-ingressgateway, which configures Anthos Service Mesh to open a specific port on the service mesh. This allows you to set up a load balancer, which routes traffic sent to your domain name to this port. The other options enable Anthos Service Mesh certificate authority (Mesh CA).

Check the control plane components

Check that the control plane Pods in istio-system are up:

kubectl get pod -n istio-system

Expect to see output similar to the following:

NAME                                      READY   STATUS      RESTARTS   AGE
istio-galley-5c65896ff7-m2pls             2/2     Running     0          18m
istio-ingressgateway-587cd459f-q6hqt      2/2     Running     0          18m
istio-nodeagent-74w69                     1/1     Running     0          18m
istio-nodeagent-7524w                     1/1     Running     0          18m
istio-nodeagent-7652w                     1/1     Running     0          18m
istio-nodeagent-7948w                     1/1     Running     0          18m
istio-pilot-9db77b99f-7wfb6               2/2     Running     0          18m
istio-sidecar-injector-69c4d9f875-dt8rn   1/1     Running     0          18m
promsd-55f464d964-lqs7w                   2/2     Running     0          18m

You should see an instance of the istio-nodeagent for each node in your cluster. Mesh CA, which replaces the Citadel OSS Istio component, creates the node agents to issue mTLS certificates for the workloads running in your service mesh.

Deploying a sample application

Before you enable IAP, you need an application running on your GKE cluster so you can verify that all requests have an identity. This guide uses the Bookinfo sample to demonstrate how to set up the HTTP(S) load balancer and enable IAP.

Start the application services

  1. Change directory to the root of the Anthos Service Mesh installation.

  2. Label the default namespace to use automatic sidecar injection:

    kubectl label namespace default istio-injection=enabled
  3. Deploy the application:

    kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
  4. Confirm all bookinfo services are running:

    kubectl get services

    The expected output is similar to:

    NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
    details                      9080/TCP             6m
    kubernetes                    443/TCP              7d
    productpage                 9080/TCP             6m
    ratings                      9080/TCP             6m
    reviews                     9080/TCP             6m
  5. Confirm all Pods are running:

    kubectl get pods

    The expected output is similar to:

    NAME                                        READY     STATUS    RESTARTS   AGE
    details-v1-1520924117-48z17                 2/2       Running   0          6m
    productpage-v1-560495357-jk1lz              2/2       Running   0          6m
    ratings-v1-734492171-rnr5l                  2/2       Running   0          6m
    reviews-v1-874083890-f0qf0                  2/2       Running   0          6m
    reviews-v2-1343845940-b34q5                 2/2       Running   0          6m
    reviews-v3-1813607990-8ch52                 2/2       Running   0          6m
  6. Confirm that the Bookinfo application is running:

    kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

    Expected output:

    <title>Simple Bookstore App</title>
  7. Define the ingress gateway and virtual service for the application:

    kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
  8. Confirm that the gateway was created:

    kubectl get gateway

    The expected output is similar to:

    NAME                AGE
    bookinfo-gateway    32s

External requests

Bookinfo's Gateway resource (defined in samples/bookinfo/networking/bookinfo-gateway.yaml) uses the preconfigured istio-ingressgateway. Recall that when you deployed Anthos Service Mesh, you specified NodePort for the istio-ingressgateway, which opens a specific port on the service mesh. Until you set up the load balancer, the Bookinfo application isn't accessible outside of your GKE cluster (such as from a browser). Although the nodes in your cluster have external IP addresses, requests coming from outside your cluster are blocked by Google Cloud firewall rules. With IAP, the correct way to expose this application to the public internet is by using a load balancer. Don't expose the node addresses using firewall rules, which would bypass IAP.

To route requests to Bookinfo, you set up an HTTP(S) load balancer in your Cloud project. Because the load balancer is in your project, it is inside of the firewall and can access the nodes in your cluster. After you configure the load balancer with the static IP address and your domain name, you can send requests to the domain name, and the load balancer forwards the requests to the nodes in the cluster.

Deploying the load balancer

You can use an Ingress resource to create an HTTP(S) load balancer with automatically configured SSL certificates. Google-managed SSL certificates are provisioned, renewed, and managed for your domain.

  1. Create a ManagedCertificate resource. This resource specifies the domain for the SSL certificate. The list must contain only one domain. Wildcard domains aren't supported.

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.gke.io/v1beta1
    kind: ManagedCertificate
    metadata:
      name: example-certificate
    spec:
      domains:
        - ${DOMAIN_NAME}
    EOF
  2. Create the load balancer by defining the Ingress resource.

    • Set the networking.gke.io/managed-certificates annotation to the name of the certificate you created in the previous step, example-certificate.

    • Set the kubernetes.io/ingress.global-static-ip-name annotation to the name of the static IP address you reserved, example-static-ip.

    • Set the serviceName to istio-ingressgateway, which is used in the Gateway resource for the Bookinfo sample.

    cat <<EOF | kubectl create -f -
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      namespace: istio-system
      annotations:
        kubernetes.io/ingress.global-static-ip-name: example-static-ip
        networking.gke.io/managed-certificates: example-certificate
    spec:
      backend:
        serviceName: istio-ingressgateway
        servicePort: 80
    EOF
  3. In the Cloud Console, go to the Kubernetes Engine > Services & Ingress page.

    Go to the Services & Ingress page

    You should see the "Creating ingress" message in the Status column. Wait for GKE to fully provision the Ingress before continuing. Refresh the page every few minutes to get the most up-to-date status on the Ingress. After the Ingress is provisioned, you might see the "Ok" status, or the error "All backend services are in UNHEALTHY state." One of the resources that GKE provisions is a default health check. If you see the error message, that indicates that the Ingress is provisioned and that the default health check ran. When you see either the "Ok" status or the error, continue with the next section to configure the health checks for the load balancer.

Configure health checks for the load balancer

To configure the health checks, you need to obtain the ID of the default health check created by the Ingress and then update the health check to use istio-ingress's health check path and port.

  1. Get new user credentials to use for Application Default Credentials:

      gcloud auth application-default login

  2. Obtain the ID of the default health check created by the Ingress:

    1. Set the following environment variables:

      • Backend Service: Bridges various Instance Groups on a given Service NodePort.

        BACKEND_SERVICE=$(gcloud compute url-maps list | grep example-ingress | awk '{print $2}' | cut -d'/' -f 2)

      • Health check: This is the default health check that is created automatically when the Ingress is deployed.

        HC=$(gcloud compute backend-services describe ${BACKEND_SERVICE} --global | grep healthChecks | cut -d'/' -f 10 | tail -n 1)

      • Health check ingress port: This is the health check port of istio-ingress.

        export HC_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="status-port")].nodePort}')

      • Health check ingress path: This is the health check path of istio-ingress.

        export HC_INGRESS_PATH=$(kubectl -n istio-system get deployments istio-ingressgateway -o jsonpath='{.spec.template.spec.containers[?(@.name=="istio-proxy")].readinessProbe.httpGet.path}')

      • Health check API: This is the API that you call to configure the health check.
        export HC_API=${PROJECT_ID}/global/healthChecks/${HC}

    2. Get the default health check into a JSON file by calling the healthChecks API:

      curl --request GET  --header "Authorization: Bearer $(gcloud auth application-default print-access-token)" ${HC_API} > health_check.json
  3. Update the health check to use istio-ingress's health check path and port:

    1. Update the health_check.json file as follows:

      • Set httpHealthCheck.port to the value of ${HC_INGRESS_PORT}.
      • Set httpHealthCheck.requestPath to the value of ${HC_INGRESS_PATH}.
      • Add the following attribute and set it to an empty string: httpHealthCheck.portSpecification=""

      The easiest way to do this is to use jq, which comes preinstalled on Cloud Shell:

      jq ".httpHealthCheck.port=${HC_INGRESS_PORT} | .httpHealthCheck.requestPath=\"${HC_INGRESS_PATH}\" | .httpHealthCheck.portSpecification=\"\"" health_check.json > updated_health_check.json

      If you run cat on the resulting updated_health_check.json file, it looks similar to the following:

      "id": "5062913090021441698",
      "creationTimestamp": "2019-11-12T10:47:41.934-08:00",
      "name": "${HC}",
      "description": "Default kubernetes L7 Loadbalancing health check.",
      "checkIntervalSec": 60,
      "timeoutSec": 60,
      "unhealthyThreshold": 10,
      "healthyThreshold": 1,
      "type": "HTTP",
      "httpHealthCheck": {
        "port": 32394,
        "requestPath": "/healthz/ready",
        "proxyHeader": "NONE",
        "portSpecification": ""
      "selfLink": "${PROJECT_ID}/global/healthChecks/${HC}",
      "kind": "compute#healthCheck"

      If you edited the JSON file manually instead of using the jq command, save the file as updated_health_check.json so that it matches the filename in the next command.

    2. Update the health check:

      curl --request PATCH --header "Authorization: Bearer $(gcloud auth application-default print-access-token)" --header "Content-Type: application/json" --data @updated_health_check.json ${HC_API}

    It takes several minutes for GKE to update the health check. In the Cloud Console, refresh the Kubernetes Engine > Services & Ingress page every minute or so until the status for the Ingress changes to "Ok."

  4. Test the load balancer. Point your browser to:

    https://YOUR_DOMAIN_NAME/productpage

    where YOUR_DOMAIN_NAME is the domain name that you configured with the external static IP address.

    You should see the Bookinfo application's productpage. If you refresh the page several times, you should see different versions of reviews, presented in a round robin style: red stars, black stars, no stars.

    You should also test https access to Bookinfo.

Enabling IAP

The following steps describe how to enable IAP.

  1. Check if you already have an existing brand by using the list command. You may only have one brand per project.

    gcloud alpha iap oauth-brands list

    The following is an example gcloud response, if the brand exists:

    name: projects/[PROJECT_NUMBER]/brands/[BRAND_ID]
    applicationTitle: [APPLICATION_TITLE]
    supportEmail: [SUPPORT_EMAIL]
    orgInternalOnly: true
  2. If no brand exists, use the create command:

    gcloud alpha iap oauth-brands create --application_title=APPLICATION_TITLE --support_email=SUPPORT_EMAIL

    The above fields are required when calling this API:

    • supportEmail: The support email displayed on the OAuth consent screen. This email address can either be a user's address or a Google Groups alias. While service accounts also have an email address, they are not actual valid email addresses, and cannot be used when creating a brand. However, a service account can be the owner of a Google Group. Either create a new Google Group or configure an existing group and set the desired service account as an owner of the group.

    • applicationTitle: The application name displayed on OAuth consent screen.

    The response contains the following fields:

    name: projects/[PROJECT_NUMBER]/brands/[BRAND_NAME]
    applicationTitle: [APPLICATION_TITLE]
    supportEmail: [SUPPORT_EMAIL]
    orgInternalOnly: true

Creating an IAP OAuth Client

  1. Use the create command to create a client. Use the brand name from the previous step.

    gcloud alpha iap oauth-clients create projects/PROJECT-ID/brands/BRAND-ID --display_name=NAME

    The response contains the following fields:

    name: projects/[PROJECT_NUMBER]/brands/[BRAND_NAME]/identityAwareProxyClients/[CLIENT_ID]
    secret: [CLIENT_SECRET]
    displayName: [NAME]

Turning on IAP for your service

Use the following command to turn on IAP for your service. Replace CLIENT_ID and CLIENT_SECRET with your OAuth client ID and client secret from the client you created previously.

gcloud beta iap web enable \
    --oauth2-client-id=CLIENT_ID \
    --oauth2-client-secret=CLIENT_SECRET \
    --resource-type=backend-services \
    --service=${BACKEND_SERVICE}

Configure the IAP access list

Add a user to the access policy for IAP:

gcloud beta iap web add-iam-policy-binding \
    --member=user:EMAIL_ADDRESS \
    --role=roles/iap.httpsResourceAccessor \
    --resource-type=backend-services \
    --service=${BACKEND_SERVICE}

where EMAIL_ADDRESS is the user's full email address.

Enable RCToken support on the service mesh

By default, IAP generates a JSON Web Token (JWT) that is scoped to the OAuth client. For Anthos Service Mesh, you can configure IAP to generate a RequestContextToken (RCToken), which is a JWT but with a configurable audience. RCToken lets you configure the audience of the JWT to an arbitrary string, which can be used in the Anthos Service Mesh policies for fine-grained authorization.
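As a toy illustration of what "a JWT with a configurable audience" means, the following sketch builds an unsigned token locally and then reads its audience claim back out. The token here is fabricated for the demo; it is not a real RCToken and carries no signature:

```shell
# Base64url-encode without padding, as JWT segments are encoded.
b64url() { printf '%s' "$1" | base64 | tr '+/' '-_' | tr -d '=\n'; }

# A JWT is header.claims.signature; the audience lives in the claims.
header=$(b64url '{"alg":"none"}')
claims=$(b64url '{"aud":"your-rctoken-aud"}')
token="${header}.${claims}."

# Recover the audience claim: take the middle segment, undo the
# base64url substitutions, restore padding, and decode.
seg=$(printf '%s' "$token" | cut -d'.' -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
printf '%s' "$seg" | base64 -d
```

An authorization policy on the mesh then compares this `aud` value against the audience it expects, which is why the RCToken audience you configure below must match the one referenced in your policies.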

To configure the RCToken:

  1. Create an environment variable for your project number. This is the number that was automatically generated and assigned to your project when you created it. (This isn't the same as the project ID.) If you followed the earlier setup steps, PROJECT_NUMBER is already set; otherwise:

    export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} --format="value(projectNumber)")

  2. Create an environment variable for the RCToken audience. This can be any string that you want.

    export RCTOKEN_AUD="your-rctoken-aud"
  3. Fetch the existing IAP settings

    gcloud beta iap settings get --format json \
    --project=${PROJECT_NUMBER} --resource-type=compute \
    --service=${BACKEND_SERVICE} > iapSettings.json
  4. Update IapSettings with the RCToken audience.

    cat iapSettings.json | jq --arg RCTOKEN_AUD_STR $RCTOKEN_AUD \
    '. + {applicationSettings: {csmSettings: {rctokenAud: $RCTOKEN_AUD_STR}}}' \
    > updatedIapSettings.json
    gcloud beta iap settings set updatedIapSettings.json --format json \
    --project=${PROJECT_NUMBER} --resource-type=compute --service=${BACKEND_SERVICE}
  5. Enable RCToken authentication on the Istio ingress gateway.

    cat <<EOF | kubectl apply -f -
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: "ingressgateway"
      namespace: istio-system
    spec:
      targets:
      - name: "istio-ingressgateway"
      origins:
      - jwt:
          issuer: ""
          jwksUri: ""
          audiences:
          - "$RCTOKEN_AUD"
          jwt_headers:
          - "ingress-authorization"
          trigger_rules:
          - excluded_paths:
            - exact: /healthz/ready
      principalBinding: USE_ORIGIN
    EOF
  6. Make sure requests to the Bookinfo productpage are still successful:


To test the policy:

  1. Create an IapSettings request object, but set the rctokenAud to a different string:

    echo $(cat <<EOF
    {
      "name": "projects/${PROJECT_NUMBER}/iap_web/compute/services/${BACKEND_SERVICE}",
      "applicationSettings": {
        "csmSettings": {
          "rctokenAud": "some-other-arbitrary-string"
        }
      }
    }
    EOF
    ) > request.txt
  2. Call the IapSettings API to set the RCtoken audience.

    curl --request PATCH --header "Authorization: Bearer $(gcloud beta auth application-default print-access-token)" --header "Content-Type: application/json" --data-binary @request.txt ${IAP_SETTINGS_API}
  3. Make a request to the Bookinfo productpage and it should fail:


Enabling Pod Security Policies

By enabling pod security policies, you make sure that compromised namespaces (other than istio-system) don't impact the security of other namespaces that are sharing the same nodes. Sample PodSecurityPolicy resource files that work with Mesh CA are provided with Anthos Service Mesh. You can modify these files as needed. In the following, you first apply the pod security policies, and then enable the pod security policy for the GKE cluster.

  1. Apply the default Pod Security Policy for all the service accounts in the cluster:

    kubectl apply -f "samples/security/psp/all-pods-psp.yaml"
  2. Apply the pod security policy to secure the Secret Discovery Service (SDS):

    kubectl apply -f "samples/security/psp/citadel-agent-psp.yaml"

    This gives the Citadel agent (also referred to as the Node Agent) the privilege to create the UDS path /var/run/sds on the host VM.

  3. Run the following command to enable the pod security policy:

    gcloud beta container clusters update ${CLUSTER_NAME} \
        --enable-pod-security-policy

    Enabling the pod security policies might take several minutes. During this process, existing workloads won't be able to connect to the Kubernetes master. Wait until the Kubernetes master is up again. You can check the cluster status in the Google Cloud Console on the Kubernetes clusters page.

    For more information, see Using pod security policies.

Cleaning up

After completing this tutorial, remove the following resources to prevent your account from incurring unwanted charges:

  1. Delete the managed certificate:

    kubectl delete managedcertificates example-certificate
  2. Delete the Ingress, which deallocates the load balancing resources:

    kubectl -n istio-system delete ingress example-ingress

  3. Delete the static IP address:

    gcloud compute addresses delete example-static-ip --global

    If you do this, be sure to delete the IP address from your domain registrar.

  4. Delete the cluster, which deletes the resources that make up the cluster, such as the compute instances, disks and network resources:

    gcloud container clusters delete ${CLUSTER_NAME}