Setting up Knative serving

Learn how to set up and configure your installation of Knative serving.

Before you begin

You must have Knative serving installed on your GKE cluster. See the installation guide for details about GKE cluster prerequisites and how to install Knative serving.

Setting up authentication with Workload Identity

You can use Workload Identity to authenticate your Knative serving services to Google Cloud APIs and services. You must set up Workload Identity before you deploy services to your cluster; otherwise, each service that exists on your cluster before you enable Workload Identity must be migrated. Learn more about using Workload Identity.
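
The overall flow looks roughly like the following sketch. The commands assume an existing cluster and use placeholder values (CLUSTER_NAME, ZONE, PROJECT_ID, GSA_NAME, NAMESPACE, and KSA_NAME) that you replace with your own; refer to the Workload Identity documentation for the authoritative steps for your cluster.

    # Enable Workload Identity on an existing cluster.
    gcloud container clusters update CLUSTER_NAME \
        --zone ZONE \
        --workload-pool=PROJECT_ID.svc.id.goog

    # Allow a Kubernetes service account to impersonate a Google service account.
    gcloud iam service-accounts add-iam-policy-binding \
        GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

    # Annotate the Kubernetes service account with the Google service account it uses.
    kubectl annotate serviceaccount KSA_NAME \
        --namespace NAMESPACE \
        iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com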

Enabling metrics with Workload Identity

To enable metrics, such as reporting request count or request latency to Google Cloud Observability, you must manually grant write permissions for Cloud Monitoring. For details, see Enabling metrics with Workload Identity.
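
In practice, this usually means granting the roles/monitoring.metricWriter role to the Google service account that your services run as. In the following sketch, PROJECT_ID and GSA_NAME are placeholders for your own values.

    # Grant the Cloud Monitoring metric writer role to the Google service account.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member "serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
        --role roles/monitoring.metricWriter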

Configuring HTTPS and custom domains

To enable HTTPS and use a custom domain, see the pages on mapping custom domains and enabling HTTPS for Knative serving.
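
As one example of the workflow, a custom domain is typically mapped to a deployed service with a command similar to the following sketch; the service name, domain, and cluster values are placeholders, and the exact flags for your setup are described in the custom domain documentation.

    # Map a custom domain to an existing service on the cluster.
    gcloud run domain-mappings create \
        --service SERVICE_NAME \
        --domain example.com \
        --platform gke \
        --cluster CLUSTER_NAME \
        --cluster-location CLUSTER_LOCATION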

Setting up Anthos Service Mesh

To configure Anthos Service Mesh options for Knative serving, see In-cluster control plane options. One of these options, setting up a private, internal network, is described in the following section.

Setting up a private, internal network

Deploying services on an internal network is useful for enterprises that provide internal apps to their staff, and for services that are used by clients that run outside the Knative serving cluster. This configuration allows other resources in your network to communicate with the service using a private, internal (RFC 1918) IP address that can't be accessed by the public.

To create your internal network, you configure Anthos Service Mesh to use Internal TCP/UDP Load Balancing instead of a public, external network load balancer. You can then deploy your Knative serving services on an internal IP address within your VPC network.

To set up the internal load balancer:

  1. Enable the internal load balancer feature in Anthos Service Mesh.

    The internal load balancer is an optional feature that you can configure during the installation of Anthos Service Mesh, or by updating your existing installation.

    Follow the steps in Enabling optional features on the in-cluster control plane and make sure to include the --option internal-load-balancer script option. An example installation command is shown after this procedure.

    When you specify the --option internal-load-balancer option, the script automatically fetches the Enable an internal load balancer custom resource from GitHub. If you need to modify the custom resource, follow the instructions for using the --custom_overlay option instead.

  2. Run the following command to watch updates to your GKE cluster:

    kubectl -n INGRESS_NAMESPACE get svc istio-ingressgateway --watch
    

    Replace INGRESS_NAMESPACE with the namespace of your Anthos Service Mesh ingress service. Specify istio-system if you installed Anthos Service Mesh using its default configuration.

    1. Note the cloud.google.com/load-balancer-type: Internal annotation on the service.
    2. Wait for the value of IP for the Ingress load balancer to change to a private IP address.
    3. Press Ctrl+C to stop the updates once you see a private IP address in the IP field.
  3. For private clusters on Google Cloud, you must open ports. For details, see opening ports on your private cluster in the Anthos Service Mesh documentation.
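
The following sketch shows what the installation command from step 1 can look like, assuming you install Anthos Service Mesh with the asmcli script; the placeholder values and any additional flags that your installation requires are described in the Anthos Service Mesh documentation.

    # Install or update Anthos Service Mesh with the internal load balancer option.
    ./asmcli install \
        --project_id PROJECT_ID \
        --cluster_name CLUSTER_NAME \
        --cluster_location CLUSTER_LOCATION \
        --enable_all \
        --option internal-load-balancer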

To verify internal connectivity after your changes:

  1. Deploy a service called sample to Knative serving in the default namespace:

    gcloud run deploy sample \
    --image gcr.io/knative-samples/helloworld \
    --namespace default \
    --platform gke
    
  2. Create a Compute Engine virtual machine (VM) in the same zone as the GKE cluster. If your default Compute Engine zone is not the cluster's zone, add the --zone flag to the instances create command:

    VM=cloudrun-gke-ilb-tutorial-vm
    
    gcloud compute instances create $VM
    
  3. Store the private IP address of the Istio Ingress Gateway in an environment variable called EXTERNAL_IP and a file called external-ip.txt:

    export EXTERNAL_IP=$(kubectl -n INGRESS_NAMESPACE get svc istio-ingressgateway \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | tee external-ip.txt)
    

    Replace INGRESS_NAMESPACE with the namespace of your Anthos Service Mesh ingress service. Specify istio-system if you installed Anthos Service Mesh using its default configuration.

  4. Copy the file containing the IP address to the VM:

    gcloud compute scp external-ip.txt $VM:~
    
  5. Connect to the VM using SSH:

    gcloud compute ssh $VM
    
  6. While in the SSH session, test the sample service:

    curl -s -w'\n' -H Host:sample.default.nip.io $(cat external-ip.txt)
    

    The output is as follows:

    Hello World!
    
  7. Leave the SSH session:

    exit
    

Setting up a multi-tenant environment

In multi-tenant use cases, you'll need to manage and deploy Knative serving services to a Google Kubernetes Engine cluster that is outside your current project. For more information about GKE multi-tenancy, see Cluster multi-tenancy.

To learn how to configure multi-tenancy for Knative serving, see Cross-project multi-tenancy.
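
At a high level, working with a cluster in another project typically means fetching credentials for that cluster and naming the project explicitly when you deploy, roughly as in the following sketch. CLUSTER_NAME, ZONE, CLUSTER_LOCATION, OTHER_PROJECT_ID, SERVICE_NAME, and IMAGE_URL are placeholders, and the cross-project permissions you need are covered in Cross-project multi-tenancy.

    # Fetch credentials for a cluster that belongs to another project.
    gcloud container clusters get-credentials CLUSTER_NAME \
        --zone ZONE \
        --project OTHER_PROJECT_ID

    # Deploy a service to that cluster by naming the cluster and project explicitly.
    gcloud run deploy SERVICE_NAME \
        --image IMAGE_URL \
        --platform gke \
        --cluster CLUSTER_NAME \
        --cluster-location CLUSTER_LOCATION \
        --project OTHER_PROJECT_ID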

What's next