This guide shows how to set up a Google Kubernetes Engine cluster with Cloud Run on GKE enabled. Because you can use either the GCP Console or the gcloud command line, the instructions cover both of these.
Sign in to your Google Account.
If you don't already have one, sign up for a new account.
Select or create a Google Cloud Platform project.
Make sure that billing is enabled for your Google Cloud Platform project.
Setting up gcloud
Although you can use either the GCP Console or the gcloud command line to use Cloud Run on GKE, you must use the gcloud command line for certain tasks, such as setting up outbound networking.
To set up the gcloud command line for Cloud Run on GKE:
Set your default project for gcloud to the one you just created:
gcloud config set project [PROJECT-ID]
Replace [PROJECT-ID] with the project ID of the project you created.
Set your default zone for gcloud to the desired zone for the new cluster. You can use any zone where GKE is supported, for example:
gcloud config set compute/zone us-central1-a
Enable the following APIs for the project. These are needed to create a cluster and to build and publish a container to Google Container Registry:
gcloud services enable container.googleapis.com containerregistry.googleapis.com cloudbuild.googleapis.com
Update installed gcloud components:
gcloud components update
Install the gcloud beta components:
gcloud components install beta
gcloud components install kubectl
Kubernetes clusters come with a namespace named default. If you need to create a new namespace, run:
kubectl create namespace [NAMESPACE]
Replace [NAMESPACE] with the namespace you want to create.
If you choose to use a Kubernetes namespace other than default, configure it using the command:
gcloud config set run/namespace [NAMESPACE]
After you create your cluster with Cloud Run on GKE enabled as described in the next section, set your default cluster and cluster location, and then configure kubectl for the cluster as follows:
gcloud config set run/cluster [CLUSTER]
gcloud config set run/cluster_location [CLUSTER_LOCATION]
gcloud container clusters get-credentials [CLUSTER]
Replace [CLUSTER] and [CLUSTER_LOCATION] with the name and location of the cluster you are using.
Creating a cluster with Cloud Run enabled
These instructions create a cluster with this configuration:
- Cloud Run on GKE enabled
- Kubernetes version 1.11.8-gke.4 (see recommended versions for other choices)
- Nodes with 4 vCPU
- Scopes to access cloud-platform, write to logging, write to monitoring
These are the minimum settings. Note that Kubernetes version 1.11.7-gke.6 (or later) is required.
You can use either the gcloud command line or the console to create a cluster. Click the appropriate tab for instructions.
To create a cluster and enable it for Cloud Run on GKE:
Go to the Google Kubernetes Engine page in the GCP Console:
Click Create cluster to open the Create a Kubernetes cluster page.
Select the Standard cluster template, and set the following values in the template:
- Enter the name you want for your cluster.
- Choose either Zonal or Regional for the location type: either works with Cloud Run on GKE. Zonal clusters are less expensive, but incur downtime during master upgrades.
- Select a zone or region for the cluster, depending on your choice in the previous step. Choose a zone or region close to you.
- From the dropdown list, select the Master version for your cluster. You must use version 1.11.8-gke.4 or newer. Other recommended versions are listed below.
Configure the node pool with these recommended settings:
- Set Number of nodes to 3
- For Machine type select 4 vCPUs.
In the Node pool form, click More node pool options to expand the form.
These instructions don't enable cluster autoscaling, so leave the checkbox Enable autoscaling unchecked. Note that even if you don't enable autoscaling of the cluster size, Cloud Run on GKE will always autoscale instances of your services within the cluster.
Under Security, change the Access scopes to Allow full access to all Cloud APIs:
Click Availability, networking, security, and additional features to expand the form, and scroll down to Stackdriver:
- Select Enable Stackdriver Logging service.
- Select Enable Stackdriver Monitoring service.
- Select Try the new Stackdriver beta Monitoring and Logging experience.
- Select the Enable Istio (beta) checkbox.
- Set Enable mTLS to Permissive.
- Select the Enable Cloud Run on GKE (beta) checkbox.
Note that you must select Stackdriver Monitoring and Logging in order to create the cluster.
Click Create to create and provision the cluster with the configuration you just completed. It may take a few moments for this process to finish.
To create a new cluster that enables Cloud Run on GKE:
Create a new cluster using the command:
gcloud beta container clusters create [CLUSTER_NAME] \
  --addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
  --machine-type=n1-standard-4 \
  --cluster-version=[VERSION] --zone=[ZONE] \
  --enable-stackdriver-kubernetes --enable-ip-alias \
  --scopes cloud-platform
Replace [CLUSTER_NAME] with the name you want for your cluster.
Replace [VERSION] with the latest version supported by Cloud Run.
Replace [ZONE] with the zone you are using for your cluster, for example, us-central1-a.
Note that the cluster won't be created unless you use the --enable-stackdriver-kubernetes parameter.
Note that although these instructions don't enable cluster autoscaling to resize clusters for demand, Cloud Run on GKE automatically scales instances within the cluster.
Wait for the cluster creation to complete. During the creation process, you should see messages similar to the following:
Creating cluster my-cluster...done.
Created [https://container.googleapis.com/v1beta1/projects/my-project/zones/us-central1-b/clusters/my-cluster].
where my-project is your own project ID. You have just created a new Google Kubernetes Engine cluster named my-cluster in the project my-project.
Set gcloud defaults to use your new cluster and cluster location, to avoid having to specify these when you use the gcloud command line:
gcloud config set run/cluster [CLUSTER_NAME]
gcloud config set run/cluster_location us-central1-a
Replace [CLUSTER_NAME] with the name you used for your cluster, and if necessary replace us-central1-a with the supported cluster location of your choice.
Enabling all outbound network access
By default, all outbound traffic from the cluster is blocked, including access to Google APIs. To enable all outbound network access, for example to connect to GCP services such as Cloud Storage or external APIs, you need to set the correct scope of the proxy IP range by editing the config-network ConfigMap. You'll need to use the gcloud command line (see Setting up gcloud) and the kubectl command-line tool.
Determining the IP scope of your cluster
To set the correct scope, you need to determine the current IP ranges of your cluster. The scope varies depending on your cluster configuration.
Invoke the command to determine the scope:
gcloud container clusters describe [CLUSTER_NAME] \
  | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
Replace [CLUSTER_NAME] with your cluster name. Note that you must supply the cluster name even if you have set it as the default cluster for gcloud. Note also that if you haven't set your default zone as shown in the prerequisites section, you must also supply the --zone parameter after the cluster name:
gcloud container clusters describe [CLUSTER_NAME] --zone [ZONE] \
  | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
Replace [ZONE] with your cluster's zone.
Note the IP ranges shown from the above command, similar to this:
...
clusterIpv4Cidr: 10.8.0.0/14
servicesIpv4Cidr: 10.11.240.0/20
...
You must combine these IP ranges, separated by a comma, to enable all outbound access, as described in the next section.
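The combination step can be sketched in shell. This is a minimal sketch, not part of the official instructions: the sample text stands in for real `gcloud container clusters describe` output, and the script joins the two CIDR ranges with a comma:

```shell
# Join the two CIDR ranges from `gcloud container clusters describe` output
# into the comma-separated value the ConfigMap expects. The sample text below
# stands in for a real gcloud call; substitute your own cluster's output.
describe_output='clusterIpv4Cidr: 10.8.0.0/14
servicesIpv4Cidr: 10.11.240.0/20'

ranges=$(printf '%s\n' "$describe_output" | awk '{print $2}' | paste -sd, -)
echo "$ranges"   # prints 10.8.0.0/14,10.11.240.0/20
```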
Setting the IP scope
The istio.sidecar.includeOutboundIPRanges parameter in the config-network ConfigMap specifies the IP ranges that the Istio sidecar intercepts. To allow outbound access, replace the default parameter value with the IP ranges of your cluster that you obtained in the previous steps:
Run the following command to edit the config-network ConfigMap:
kubectl edit configmap config-network --namespace knative-serving
Use an editor of your choice to change the istio.sidecar.includeOutboundIPRanges parameter value from * to the IP ranges you obtained in the previous steps. Separate multiple IP entries with a comma. For example:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  istio.sidecar.includeOutboundIPRanges: '10.16.0.0/14,10.19.240.0/20'
kind: ConfigMap
metadata:
...
When you set the parameter to a valid set of IP address ranges, Istio no longer intercepts traffic going to IP addresses outside the provided ranges, and you don't need to specify any egress rules.
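As an alternative to interactive editing, the same change can be applied non-interactively with kubectl patch. A minimal sketch, using example IP ranges; the script only builds and prints the patch payload, and the cluster command is shown as a comment because it needs a live cluster:

```shell
# Build the JSON patch payload for the config-network ConfigMap.
# The IP ranges are examples; substitute the ranges from your own cluster.
ranges="10.16.0.0/14,10.19.240.0/20"
patch=$(printf '{"data":{"istio.sidecar.includeOutboundIPRanges":"%s"}}' "$ranges")
echo "$patch"

# Then apply it to your cluster:
# kubectl patch configmap config-network --namespace knative-serving --patch "$patch"
```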
If you omit the istio.sidecar.includeOutboundIPRanges parameter or set it to '', the value of the global.proxy.includeIPRanges parameter provided at Istio deployment time is used instead; by default this value is *. Note that if an invalid value is used, '' is used instead.
Save your changes. Note that any change is automatically picked up and used for all deployed revisions.
Troubleshooting outbound networking issues
If you experience trouble making calls outside your cluster, verify that the
policy was applied to the pod running your service by checking the metadata on
the pod. Verify that the traffic.sidecar.istio.io/includeOutboundIPRanges annotation matches the expected value from the config-network ConfigMap.
Make sure there is a pod running, because pods can scale to zero:
curl -H "Host: helloworld-go.default.example.com" http://35.203.155.229
Replace the host URL and IP address with your own URL and the cluster's IP address. If you don't know how to locate the cluster's IP address, see the instructions in Accessing your deployed service.
Within 5 minutes, invoke this command to get the list of available pods:
kubectl get pods
From the output of the get pods command, locate the pod associated with your service: its name starts with the name of your service.
Use that pod name in the following command to retrieve the metadata and see the labels applied.
kubectl get pod [POD_NAME] --output yaml
Replace [POD_NAME] with your pod name. See the pod documentation for more information on pods.
You should see a result similar to this:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    serving.knative.dev/configurationGeneration: "2"
    sidecar.istio.io/inject: "true"
    ...
    traffic.sidecar.istio.io/includeOutboundIPRanges: 10.16.0.0/14,10.19.240.0/20
    ...
The line starting with traffic.sidecar.istio.io/includeOutboundIPRanges: 10.16.0.0/14,10.19.240.0/20 has the most important information.
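The annotation check can also be scripted. A minimal sketch, where the sample YAML stands in for real `kubectl get pod [POD_NAME] --output yaml` output and the expected value is the example range used above:

```shell
# Compare the pod's outbound-range annotation with the expected value.
# The sample YAML below stands in for real `kubectl get pod` output;
# substitute your own pod metadata and expected ranges.
pod_yaml='  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: 10.16.0.0/14,10.19.240.0/20'

expected='10.16.0.0/14,10.19.240.0/20'
actual=$(printf '%s\n' "$pod_yaml" \
  | grep 'traffic.sidecar.istio.io/includeOutboundIPRanges' \
  | awk '{print $2}')

if [ "$actual" = "$expected" ]; then
  echo "annotation matches"
else
  echo "annotation mismatch: got '$actual'"
fi
```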
Setting up a custom domain
If you want to use custom domains, see Mapping custom domains.
Disabling Cloud Run on GKE
During the beta, Cloud Run on GKE cannot be disabled after you create a cluster with it enabled. You must delete the cluster to stop the Cloud Run on GKE components from running. Note that this permanently deletes workloads in the cluster and all other cluster states.
You can use the console UI or the gcloud command line to delete clusters: select the tab for instructions.
To delete the cluster:
Go to the Google Kubernetes Engine page in the GCP Console:
Select the cluster you want to delete.
To delete a cluster:
Invoke the following command:
gcloud beta container clusters delete [CLUSTER_NAME]
Replace [CLUSTER_NAME] with the name of the cluster you are deleting.
When prompted to confirm the cluster deletion, respond y.
Wait for the deletion to finish. You should see messages similar to the following:
Deleting cluster my-cluster...done.
Deleted [https://container.googleapis.com/v1beta1/projects/my-project-1234/zones/us-central1-b/clusters/serverless-cluster].