Setting up Cloud Run on GKE

This guide shows how to set up a Google Kubernetes Engine cluster with Cloud Run on GKE enabled. Because you can use either the GCP Console or the gcloud command line, the instructions cover both.

Note that enabling Cloud Run on GKE installs Istio and Knative Serving into the cluster to connect and manage your stateless workloads.

Prerequisites

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Google Cloud Platform project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your Google Cloud Platform project.

    Learn how to enable billing

Setting up gcloud

Although you can use either the GCP Console or the gcloud command line with Cloud Run on GKE, you must use the gcloud command line for certain tasks, such as setting up outbound networking.

To set up the gcloud command line for Cloud Run on GKE:

  1. Install and initialize the Cloud SDK.

  2. Set the default project for gcloud to the one you just created:

    gcloud config set project [PROJECT-ID]
    

    Replace [PROJECT-ID] with the project ID of the project you created.

  3. Set the default compute zone to the desired zone for the new cluster. You can use any zone where GKE is supported, for example:

    gcloud config set compute/zone us-central1-a
    
  4. Enable the following APIs for the project, which are needed to create a cluster and to build and publish containers to Google Container Registry:

    gcloud services enable container.googleapis.com containerregistry.googleapis.com cloudbuild.googleapis.com
    
  5. Update installed gcloud components:

    gcloud components update
    
  6. Install the gcloud beta components:

    gcloud components install beta
    
  7. Install the kubectl command-line tool:

    gcloud components install kubectl
    
  8. Kubernetes clusters come with a namespace named default. If you need to create a new namespace, run:

    kubectl create namespace [NAMESPACE]
    

    Replace [NAMESPACE] with the name of the namespace you want to create.

  9. If you use a Kubernetes namespace other than default, configure it using gcloud:

    gcloud config set run/namespace [NAMESPACE]
    
  10. After you create your cluster with Cloud Run on GKE enabled as described in the next section, set your default cluster and cluster location, and then configure kubectl for the cluster as follows:

    gcloud config set run/cluster [CLUSTER]
    gcloud config set run/cluster_location [CLUSTER_LOCATION]
    gcloud container clusters get-credentials [CLUSTER]
    

    Replace [CLUSTER] and [CLUSTER_LOCATION] with the name and location of the cluster you are using.

Creating a cluster with Cloud Run enabled

These instructions create a cluster with this configuration:

  • Cloud Run on GKE enabled
  • Kubernetes version 1.11.8-gke.4 (see recommended versions for other choices)
  • Nodes with 4 vCPU
  • Scopes to access cloud-platform, write to logging, write to monitoring

These are the minimum settings. Note that Kubernetes version 1.11.7-gke.6 (or later) is required.

You can use either the gcloud command line or the console to create a cluster. Click the appropriate tab for instructions.

Console

To create a cluster and enable it for Cloud Run on GKE:

  1. Go to the Google Kubernetes Engine page in the GCP Console:

    Go to Google Kubernetes Engine

  2. Click Create cluster to open the Create a Kubernetes cluster page.

  3. Select the Standard cluster template, and set the following values in the template:

    standard cluster template

    • Enter the name you want for your cluster.
    • Choose either Zonal or Regional for the location type: either works with Cloud Run on GKE. Zonal clusters are less expensive, but incur downtime during master upgrades.
    • Select a zone or region for the cluster, depending on your choice in the previous step. Choose a zone or region close to you, or use us-central1-a.
    • From the dropdown list, select the Master version for your cluster. You must use version 1.11.8-gke.4 or newer. Other recommended versions are listed below.
    • Configure the node pool with these recommended settings:

      • Set Number of nodes to 3
      • For Machine type select 4 vCPUs.
    • In the Node pool form, click More node pool options to expand the form.

      • These instructions don't enable cluster autoscaling, so leave the checkbox Enable autoscaling unchecked. Note that even if you don't enable autoscaling of the cluster size, Cloud Run on GKE will always autoscale instances of your services within the cluster.

      • Under Security, change the Access scopes to Allow full access to all Cloud APIs:

        cluster security

      • Click Save.

    • Click Availability, networking, security, and additional features to expand the form, and scroll down to Stackdriver:

      Stackdriver features

      • Select Enable Stackdriver Logging service.
      • Select Enable Stackdriver Monitoring service.
      • Select Try the new Stackdriver beta Monitoring and Logging experience.
      • Select the Enable Istio (beta) checkbox.
      • Set Enable mTLS to Permissive.
      • Select the Enable Cloud Run on GKE (beta) checkbox.

      Note that you must select Stackdriver Monitoring and Logging in order to create the cluster.

  4. Click Create to create and provision the cluster with the configuration you just completed. It may take a few moments for this process to finish.

Command line

To create a new cluster that enables Cloud Run on GKE:

  1. Create a new cluster using the command:

    gcloud beta container clusters create [CLUSTER_NAME] \
      --addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
      --machine-type=n1-standard-4 \
      --cluster-version=[VERSION] --zone=[ZONE] \
      --enable-stackdriver-kubernetes --enable-ip-alias \
      --scopes cloud-platform
    

    Replace

    • [CLUSTER_NAME] with the name you want for your cluster.
    • [VERSION] with the latest version supported by Cloud Run.
    • [ZONE] with the zone you are using for your cluster, for example, us-central1-a.

    Note that the cluster won't be created unless you use the parameter --enable-stackdriver-kubernetes as shown.

    Note that although these instructions don't enable cluster autoscaling to resize clusters for demand, Cloud Run on GKE automatically scales instances within the cluster.

  2. Wait for the cluster creation to complete. During the creation process, you should see messages similar to the following:

    Creating cluster my-cluster...done.
    Created [https://container.googleapis.com/v1beta1/projects/my-project/zones/us-central1-b/clusters/my-cluster].
    

    where my-project is your own project ID. You have just created a new Google Kubernetes Engine cluster named my-cluster in the project my-project.

  3. Set gcloud defaults to use your new cluster and cluster location, to avoid having to specify these when you use the gcloud command line:

    gcloud config set run/cluster [CLUSTER_NAME]
    gcloud config set run/cluster_location us-central1-a
    

    Replace [CLUSTER_NAME] with the name you used for your cluster, and if necessary replace us-central1-a with the supported cluster location of your choice.

Enabling all outbound network access

By default, all outbound traffic from the cluster is blocked, including access to Google APIs. To enable all outbound network access, for example to connect to GCP services such as Cloud Storage or to external APIs, you need to set the correct scope of the proxy IP range by editing the config-network ConfigMap. You'll need the gcloud and kubectl command-line tools (see Setting up gcloud).

Determining the IP scope of your cluster

To set the correct scope, you need to determine the current IP ranges of your cluster. The scope varies depending on your cluster configuration.

  1. Invoke the command to determine the scope:

    gcloud container clusters describe [CLUSTER_NAME] \
      | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
    

    Replace [CLUSTER_NAME] with your cluster name. Note that you must supply the cluster name even if you have set it as the default cluster for gcloud. Note also that if you haven't set a default zone as shown in the prerequisites section, you must supply the zone parameter after the cluster name: --zone=[ZONE], replacing [ZONE] with your cluster's zone.

  2. Note the IP ranges shown from the above command, similar to this:

    ...
    clusterIpv4Cidr: 10.8.0.0/14
    servicesIpv4Cidr: 10.11.240.0/20
    ...
    

    You must combine these IP ranges, separated by a comma, to enable all outbound access, as described in the next section.
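Combining the two ranges into the comma-separated value used in the next section can be done with standard shell tools. A minimal sketch, run here against the sample output above saved to a file (the file name cidrs.txt and the CIDR values are illustrative):

```shell
# In practice, capture the describe output first, for example:
#   gcloud container clusters describe [CLUSTER_NAME] \
#     | grep -e clusterIpv4Cidr -e servicesIpv4Cidr > cidrs.txt
# The sample values below mirror the example output above.
cat > cidrs.txt <<'EOF'
clusterIpv4Cidr: 10.8.0.0/14
servicesIpv4Cidr: 10.11.240.0/20
EOF

# Drop the field names and join the two values with a comma:
RANGES=$(awk '{print $2}' cidrs.txt | paste -sd, -)
echo "$RANGES"    # 10.8.0.0/14,10.11.240.0/20
```

The resulting value is exactly the string you paste into the istio.sidecar.includeOutboundIPRanges parameter in the next section.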

Setting the IP scope

The istio.sidecar.includeOutboundIPRanges parameter in the config-network ConfigMap specifies the IP ranges that the Istio sidecar intercepts. To allow outbound access, replace the default parameter value with the IP ranges of your cluster that you obtained in the previous steps:

  1. Run the following command to edit the config-network map:

    kubectl edit configmap config-network --namespace knative-serving
    
  2. Use an editor of your choice to change the istio.sidecar.includeOutboundIPRanges parameter value from * to the IP range you obtained in the previous steps. Separate multiple IP entries with a comma. For example:

     # Please edit the object below. Lines beginning with a '#' will be ignored,
     # and an empty file will abort the edit. If an error occurs while saving this file will be
     # reopened with the relevant failures.
     #
     apiVersion: v1
     data:
       istio.sidecar.includeOutboundIPRanges: '10.16.0.0/14,10.19.240.0/20'
     kind: ConfigMap
     metadata:
     ...
    

    When you set the parameter to a valid set of IP address ranges, Istio no longer intercepts traffic going to IP addresses outside the provided ranges, and you don't need to specify any egress rules.

    If you omit the istio.sidecar.includeOutboundIPRanges parameter or set it to '', the value of the global.proxy.includeIPRanges parameter provided at Istio deployment time will be used: this value is *.

    Note that if an invalid value is used, '' will be used instead.

  3. Save your changes. Note that any change is automatically picked up and used for all deployed revisions.
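If you prefer a non-interactive alternative to kubectl edit, the same change can be made with kubectl patch. The sketch below only builds and prints the command so you can review it before running it against your cluster; the IP ranges are the example values from the previous step:

```shell
# Example ranges from the previous step; replace with your cluster's values.
RANGES='10.16.0.0/14,10.19.240.0/20'

# JSON merge patch that sets the parameter in the config-network ConfigMap.
PATCH="{\"data\":{\"istio.sidecar.includeOutboundIPRanges\":\"${RANGES}\"}}"

# Printed rather than executed here; run the printed command when ready.
echo kubectl patch configmap config-network \
  --namespace knative-serving --type merge -p "$PATCH"
```

As with the interactive edit, the change is picked up automatically for all deployed revisions.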

Troubleshooting outbound networking issues

If you have trouble making calls outside your cluster, verify that the policy was applied to the pod running your service by checking the pod's metadata. The traffic.sidecar.istio.io/includeOutboundIPRanges annotation must match the value you set in the config-network ConfigMap:

  1. Make sure there is a pod running, because pods can scale to zero:

    curl -H "Host: helloworld-go.default.example.com" http://35.203.155.229
    

    Replace the host URL and IP address with your own URL and the cluster's IP address. If you don't know how to locate the cluster's IP address, see the instructions in Accessing your deployed service.

  2. Within 5 minutes, before the pod can scale back down to zero, list the running pods:

    kubectl get pods
    
  3. From the output of the get pods command, locate the pod associated with your service: it will start with the name of your service.

  4. Use that pod name in the following command to retrieve the metadata and see the labels applied.

    kubectl get pod [POD_NAME] --output yaml
    

    Replace [POD_NAME] with your pod name. See the pod documentation for more information on pods.

    You should see a result similar to this:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        serving.knative.dev/configurationGeneration: "2"
        sidecar.istio.io/inject: "true"
        ...
        traffic.sidecar.istio.io/includeOutboundIPRanges: 10.16.0.0/14,10.19.240.0/20
    ...
    

    The line starting with traffic.sidecar.istio.io/includeOutboundIPRanges shows the ranges actually applied to the pod: it should match the value you set in the config-network ConfigMap.
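The check can also be scripted rather than reading the YAML by eye. A sketch that extracts just the annotation value, shown here against a saved sample (in practice, use the output of kubectl get pod [POD_NAME] --output yaml; the pod.yaml file and its contents are illustrative):

```shell
# Sample pod metadata for illustration; in practice capture it with:
#   kubectl get pod [POD_NAME] --output yaml > pod.yaml
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
    traffic.sidecar.istio.io/includeOutboundIPRanges: 10.16.0.0/14,10.19.240.0/20
EOF

# Pull out only the outbound IP range annotation value:
VALUE=$(grep 'traffic.sidecar.istio.io/includeOutboundIPRanges' pod.yaml \
  | awk '{print $2}')
echo "$VALUE"    # 10.16.0.0/14,10.19.240.0/20
```

If the printed value doesn't match what you set in the config-network ConfigMap, repeat the steps in Setting the IP scope and redeploy.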

Setting up a custom domain

If you want to use custom domains, see Mapping custom domains.

Disabling Cloud Run on GKE

During the beta, Cloud Run on GKE cannot be disabled after you create a cluster with it enabled. You must delete the cluster to stop the Cloud Run on GKE components from running. Note that deleting the cluster permanently deletes all workloads in the cluster and all other cluster state.

You can use the GCP Console or the gcloud command line to delete clusters: select the appropriate tab for instructions.

Console

To delete the cluster:

  1. Go to the Google Kubernetes Engine page in the GCP Console:

    Go to Google Kubernetes Engine

  2. Select the cluster you want to delete.

  3. Click Delete.

Command line

To delete a cluster:

  1. Invoke the following command:

    gcloud beta container clusters delete [CLUSTER_NAME]
    

    Replace [CLUSTER_NAME] with the name of the cluster you are deleting.

  2. When prompted to confirm the cluster deletion, respond y.

  3. Wait for the deletion to finish. You should see messages similar to the following:

    Deleting cluster my-cluster...done.
    Deleted [https://container.googleapis.com/v1beta1/projects/my-project-1234/zones/us-central1-b/clusters/serverless-cluster].
    
