Deploying internal services using Cloud Run for Anthos deployed on GKE

This tutorial demonstrates how to expose services deployed to Cloud Run for Anthos deployed on GKE on your internal network. This type of configuration allows other resources in your network to communicate with the service using a private, internal (RFC 1918) IP address. Exposing services on an internal network is useful for enterprises that provide internal apps to their staff, and for services that are used by clients that run outside the Cloud Run for Anthos deployed on GKE cluster.

Cloud Run for Anthos deployed on GKE provides a developer-focused experience for deploying and serving apps and functions running on GKE. By default, Cloud Run for Anthos deployed on GKE exposes services outside the cluster on a public IP address by using Istio's ingress gateway. This gateway is a Kubernetes service of type LoadBalancer, which means it's exposed on a public IP address using Network Load Balancing.

Istio also provides an internal load balancing (ILB) gateway. This gateway provides a way for other resources in the same region to access your services by using an internal IP address in your VPC network with Internal TCP/UDP Load Balancing. The Istio add-on for GKE doesn't install the ILB gateway, but you can add it as an extra component. This tutorial shows you how to use this ILB gateway for services deployed to Cloud Run for Anthos deployed on GKE.
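
For context, the manifest below is a minimal sketch (not taken from the Istio chart) of how a Kubernetes Service requests Internal TCP/UDP Load Balancing on GKE: the `cloud.google.com/load-balancer-type` annotation tells GKE to allocate an internal forwarding rule instead of an external one. The ILB gateway you install later renders a Service of this general shape:

```yaml
# Hypothetical minimal Service illustrating how GKE selects Internal
# TCP/UDP Load Balancing: the annotation below makes the load balancer
# internal to the VPC network instead of exposing a public IP address.
apiVersion: v1
kind: Service
metadata:
  name: istio-ilbgateway      # matches the gateway installed later
  namespace: istio-system
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    istio: ilbgateway
  ports:
  - name: https
    port: 443
  - name: http
    port: 80
```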


Objectives

  • Create a GKE cluster with Cloud Run enabled.
  • Install the Istio ILB gateway.
  • Update Cloud Run for Anthos deployed on GKE to use the ILB gateway.
  • Test the app by deploying a sample service to Cloud Run for Anthos deployed on GKE.


Costs

This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine and Compute Engine.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  4. In the Cloud Console, go to Cloud Shell.

    Go to Cloud Shell

    At the bottom of the Cloud Console, a Cloud Shell session opens and displays a command-line prompt. Cloud Shell is a shell environment with the Cloud SDK already installed, including the gcloud command-line tool, and with values already set for your current project. It can take a few seconds for the session to initialize.

  5. You run all commands in this tutorial from Cloud Shell.
  6. In Cloud Shell, enable the Cloud Run API, GKE API, and Cloud APIs:
    gcloud services enable \
        run.googleapis.com \
        container.googleapis.com \
        cloudapis.googleapis.com

Setting up the environment

  • In Cloud Shell, define environment variables and the gcloud tool defaults for the Compute Engine zone and GKE cluster name that you want to use for this tutorial:

    ZONE=us-central1-f
    CLUSTER=cloudrun-gke-ilb-tutorial

    gcloud config set compute/zone $ZONE
    gcloud config set run/cluster $CLUSTER
    gcloud config set run/cluster_location $ZONE

    The examples in this tutorial use us-central1-f as the zone and cloudrun-gke-ilb-tutorial as the cluster name. You can use different values. For more information, see Geography and regions.

Creating a GKE cluster with Cloud Run enabled

  1. In Cloud Shell, create a GKE cluster with the Cloud Run and Istio add-ons:

    gcloud beta container clusters create $CLUSTER \
        --machine-type n1-standard-4 \
        --enable-stackdriver-kubernetes \
        --enable-ip-alias \
        --addons HttpLoadBalancing,Istio,CloudRun
  2. Add yourself as a cluster admin so that you can install extra Istio components:

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole cluster-admin \
        --user $(gcloud config get-value core/account)

Installing the Istio ILB gateway

  1. In Cloud Shell, inspect your GKE cluster to find the version of Istio used:

    ISTIO_PACKAGE=$(kubectl -n istio-system get deployments istio-pilot \
        -o jsonpath="{.spec.template.spec.containers[0].image}" | \
        cut -d':' -f2)
    ISTIO_VERSION=$(echo $ISTIO_PACKAGE | cut -d'-' -f1)
  2. Download and extract the matching Istio release from the Istio GitHub releases page:

    curl -LO https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istio-$ISTIO_VERSION-linux.tar.gz
    tar zxf istio-$ISTIO_VERSION-linux.tar.gz
  3. Download and extract Helm. This tutorial uses Helm's local template rendering, so set HELM_VERSION to a Helm 2 release (for example, v2.14.3):

    HELM_VERSION=v2.14.3
    curl -LO https://get.helm.sh/helm-$HELM_VERSION-linux-amd64.tar.gz
    tar zxf helm-$HELM_VERSION-linux-amd64.tar.gz
  4. Use Helm's local template rendering to create a Kubernetes manifest that installs the Istio ILB gateway, istio-ilbgateway:

    ./linux-amd64/helm template \
        --set galley.enabled=false \
        --set gateways.enabled=true \
        --set gateways.istio-ingressgateway.enabled=false \
        --set gateways.istio-egressgateway.enabled=false \
        --set gateways.istio-ilbgateway.enabled=true \
        --set gateways.istio-ilbgateway.ports[0].name=https \
        --set gateways.istio-ilbgateway.ports[0].port=443 \
        --set gateways.istio-ilbgateway.ports[1].name=http \
        --set gateways.istio-ilbgateway.ports[1].port=80 \
        --set global.hub=gcr.io/gke-release/istio \
        --set global.omitSidecarInjectorConfigMap=true \
        --set global.tag=$ISTIO_PACKAGE \
        --set mixer.enabled=false \
        --set mixer.policy.enabled=false \
        --set mixer.telemetry.enabled=false \
        --set pilot.enabled=false \
        --set prometheus.enabled=false \
        --set security.enabled=false \
        --set sidecarInjectorWebhook.enabled=false \
        --namespace istio-system \
        istio-$ISTIO_VERSION/install/kubernetes/helm/istio \
        > istio-ilbgateway.yaml

    This command sets several template flags to false to create a manifest file that contains only the objects required to add the ILB gateway to an existing Istio installation. Applying this manifest file doesn't disable any existing Istio components.

  5. Apply the ILB gateway manifest file:

    kubectl apply -f istio-ilbgateway.yaml
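
The version-parsing in step 1 can be checked with a concrete value. Assuming the istio-pilot deployment runs an image such as gcr.io/gke-release/istio/pilot:1.1.13-gke.0 (a hypothetical example tag), the two cut commands split the image reference as follows:

```shell
# Hypothetical image reference of the kind the jsonpath query returns.
IMAGE="gcr.io/gke-release/istio/pilot:1.1.13-gke.0"

# Everything after the colon is the GKE Istio package version.
ISTIO_PACKAGE=$(echo "$IMAGE" | cut -d':' -f2)

# Everything before the first dash is the upstream Istio version.
ISTIO_VERSION=$(echo "$ISTIO_PACKAGE" | cut -d'-' -f1)

echo "$ISTIO_PACKAGE $ISTIO_VERSION"   # -> 1.1.13-gke.0 1.1.13
```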

Creating a TLS certificate for the ILB gateway

  1. In Cloud Shell, create a self-signed Transport Layer Security (TLS) certificate and private key to allow TLS termination by the ILB gateway:

    openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
        -keyout privkey.pem -out cert.pem -subj "/CN=*"
  2. Create a Kubernetes secret called istio-ilbgateway-certs to store the TLS certificate and private key:

    kubectl -n istio-system create secret tls istio-ilbgateway-certs \
        --key privkey.pem --cert cert.pem \
        --dry-run -o yaml | kubectl apply -f -
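
You can confirm locally what the secret will carry. This sketch regenerates a throwaway certificate in a temporary directory the same way as step 1, prints its subject, and checks that the key pair matches:

```shell
# Generate a throwaway self-signed certificate the same way as step 1,
# writing to a temporary directory so the tutorial's files are untouched.
tmpdir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout "$tmpdir/privkey.pem" -out "$tmpdir/cert.pem" \
    -subj "/CN=*" 2>/dev/null

# The certificate subject should show the wildcard common name.
subject=$(openssl x509 -in "$tmpdir/cert.pem" -noout -subject)
echo "$subject"

# The key matches the certificate when both yield the same public key.
cert_pub=$(openssl x509 -in "$tmpdir/cert.pem" -noout -pubkey)
key_pub=$(openssl rsa -in "$tmpdir/privkey.pem" -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
    echo "key matches certificate"
fi
```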

Configuring Cloud Run for Anthos deployed on GKE to use the ILB gateway

  1. In Cloud Shell, create a patch file to configure TLS settings and to update knative-ingress-gateway to point to the internal Istio ilbgateway instead of the external Istio ingressgateway:

    cat << EOF > knative-ingress-gateway-patch.yaml
    spec:
      selector:
        istio: ilbgateway
      servers:
      - hosts:
        - '*'
        port:
          name: http
          number: 80
          protocol: HTTP
      - hosts:
        - '*'
        port:
          name: https
          number: 443
          protocol: HTTPS
        tls:
          mode: SIMPLE
          privateKey: /etc/istio/ilbgateway-certs/tls.key
          serverCertificate: /etc/istio/ilbgateway-certs/tls.crt
    EOF
  2. Apply the patch:

    kubectl -n knative-serving patch gateway knative-ingress-gateway \
        --type merge -p "$(cat knative-ingress-gateway-patch.yaml)"

Testing the service

  1. In Cloud Shell, deploy a service called sample to Cloud Run for Anthos deployed on GKE in the default namespace. This tutorial uses the public gcr.io/cloudrun/hello sample image; you can substitute your own container image:

    gcloud beta run deploy sample \
        --image gcr.io/cloudrun/hello \
        --namespace default \
        --platform gke
  2. Create a Compute Engine virtual machine (VM) instance in the same zone as the GKE cluster:

    VM=cloudrun-gke-ilb-tutorial-vm   # or any instance name you prefer
    gcloud compute instances create $VM
  3. Store the ILB gateway IP address in an environment variable called ILB_IP and a file called ilb-ip.txt:

    export ILB_IP=$(kubectl -n istio-system get service istio-ilbgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | tee ilb-ip.txt)
  4. Copy the file containing the IP address of the ILB gateway to the VM:

    gcloud compute scp ilb-ip.txt $VM:~
  5. Connect to the instance using SSH:

    gcloud compute ssh $VM
  6. While in the SSH session, test the service using HTTPS. Knative routes requests by host name, so pass the service's host name in a Host header. The host name sample.default.example.com reflects Knative's default domain; replace it if your cluster uses a custom domain:

    curl -s -k -w '\n' \
        -H "Host: sample.default.example.com" \
        https://$(cat ilb-ip.txt)/

  7. Test the service using HTTP, again passing the service's host name:

    curl -s -w '\n' \
        -H "Host: sample.default.example.com" \
        http://$(cat ilb-ip.txt)/
  8. Leave the SSH session:

    exit

  9. Send a request to verify that you can't connect from outside the virtual private cloud (VPC) network:

    curl -s -v -k --connect-timeout 5 \
        -H "Host: sample.default.example.com" \
        https://$ILB_IP/

    The output is similar to the following:

    *   Trying [$ILB_IP]...
    * TCP_NODELAY set
    * Connection timed out after 5003 milliseconds
    * stopped the pause stream!
    * Closing connection 0

    This connection fails as expected because Cloud Shell runs outside your VPC network.
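
Step 3 above relies on tee to both capture and persist the gateway address in a single pipeline. The behavior can be seen with a stand-in value (10.128.0.5 here is a placeholder, not an address from your cluster):

```shell
# tee writes its stdin to the named file and also copies it to stdout,
# so one pipeline both sets the variable and saves the address to a file.
ILB_IP=$(echo "10.128.0.5" | tee /tmp/ilb-ip.txt)

echo "$ILB_IP"            # the variable holds the address
cat /tmp/ilb-ip.txt       # the file holds the same address
```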


Troubleshooting

If you run into problems with this tutorial, review the troubleshooting documentation for Istio on GKE and for Cloud Run for Anthos deployed on GKE.

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Delete the project

  1. In the Cloud Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the individual resources

If you want to keep the GCP project you used in this tutorial, delete the individual resources:

  1. Delete the GKE cluster:

    gcloud container clusters delete $CLUSTER --quiet --async
  2. Delete the Compute Engine instance:

    gcloud compute instances delete $VM --quiet

What's next
