Install Kf

This document describes how to set up a GKE cluster, then install Kf and its dependencies.

Before you begin

Overview

  • Your GKE cluster must meet the following requirements:

    • Optional, but recommended: dedicate the cluster to Kf. We recommend that you install only Kf and its dependencies to ensure that the compatibility matrix is maintained.

    • At least four nodes. If you need to add nodes, see Resizing a cluster.

    • A machine type with at least four vCPUs, such as e2-standard-4. If the machine type for your cluster doesn't have at least four vCPUs, change it as described in Migrating workloads to different machine types.

    • Optional, but recommended: enroll the cluster in a release channel. If your cluster uses a static GKE version, follow the instructions in Enrolling an existing cluster in a release channel.

    • Workload Identity enabled.

    • Artifact Registry enabled.

    • Anthos Service Mesh (ASM) installed.

    • Tekton installed. See the Dependency matrix for the version.

    • A Google Service Account with the following IAM policy (creation instructions linked below):

      • roles/iam.serviceAccountAdmin for member serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[kf/controller]

Enable support for Compute Engine

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Compute Engine API.

    Enable the API

Enable and configure GKE

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Create and prepare a new GKE cluster

Set up environment variables

Linux

export PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_NAME=kf-cluster
export COMPUTE_ZONE=us-central1-a
export COMPUTE_REGION=us-central1
export CLUSTER_LOCATION=${COMPUTE_ZONE}
export NODE_COUNT=4
export MACHINE_TYPE=e2-standard-4
export NETWORK=default
export KF_VERSION=v2.1.0
export TEKTON_VERSION=v0.19.0

Windows PowerShell

Set-Variable -Name PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_NAME -Value kf-cluster
Set-Variable -Name COMPUTE_ZONE -Value us-central1-a
Set-Variable -Name COMPUTE_REGION -Value us-central1
Set-Variable -Name CLUSTER_LOCATION -Value $COMPUTE_ZONE
Set-Variable -Name NODE_COUNT -Value 4
Set-Variable -Name MACHINE_TYPE -Value e2-standard-4
Set-Variable -Name NETWORK -Value default
Set-Variable -Name KF_VERSION -Value v2.1.0
Set-Variable -Name TEKTON_VERSION -Value v0.19.0
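
As a convenience, you can also point the gcloud CLI at the cluster project now. This is optional; the commands in this guide always pass --project explicitly:

gcloud config set project ${CLUSTER_PROJECT_ID}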

Service account setup

Create a GCP service account (GSA) that will be associated with a Kubernetes service account via Workload Identity. This removes the need to create and inject a service account key.

  1. Create the service account that Kf will use.

    gcloud iam service-accounts create ${CLUSTER_NAME}-sa \
      --project=${CLUSTER_PROJECT_ID} \
      --description="GSA for Kf ${CLUSTER_NAME}" \
      --display-name="${CLUSTER_NAME}"
  2. Allow the service account to modify its own policy. The Kf controller will use this to add bindings for new Spaces (namespaces) to the policy, allowing them to reuse the service account through Workload Identity.

    gcloud iam service-accounts add-iam-policy-binding ${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com \
      --project=${CLUSTER_PROJECT_ID} \
      --role="roles/iam.serviceAccountAdmin" \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com"
  3. Grant the monitoring metrics writer role for write access to Cloud Monitoring.

    gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role="roles/monitoring.metricWriter"
  4. Grant the log writer role for write access to Cloud Logging.

    gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role="roles/logging.logWriter"
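
As an optional check, you can list the project-level roles granted to the new service account. This uses standard gcloud filtering; note that the serviceAccountAdmin binding from step 2 lives on the service account's own policy, so it won't appear in this output:

gcloud projects get-iam-policy ${CLUSTER_PROJECT_ID} \
  --flatten="bindings[].members" \
  --filter="bindings.members:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --format="table(bindings.role)"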

Create GKE cluster

gcloud container clusters create ${CLUSTER_NAME} \
  --project=${CLUSTER_PROJECT_ID} \
  --zone=${CLUSTER_LOCATION} \
  --num-nodes=${NODE_COUNT} \
  --machine-type=${MACHINE_TYPE} \
  --network=${NETWORK} \
  --addons=HttpLoadBalancing,HorizontalPodAutoscaling,NetworkPolicy \
  --enable-stackdriver-kubernetes \
  --enable-ip-alias \
  --enable-network-policy \
  --enable-autorepair \
  --enable-autoupgrade \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --release-channel=regular \
  --workload-pool="${CLUSTER_PROJECT_ID}.svc.id.goog" \
  --service-account="${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com"
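
Once the command returns, an optional check like the following (a sketch using standard gcloud output formatting) confirms that the cluster is RUNNING and that the expected Workload Identity pool is attached:

gcloud container clusters describe ${CLUSTER_NAME} \
  --project=${CLUSTER_PROJECT_ID} \
  --zone=${CLUSTER_LOCATION} \
  --format="value(status,workloadIdentityConfig.workloadPool)"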

Set firewall rules

Kf requires certain firewall ports to be open. The cluster's control plane must be able to reach pods on TCP ports 80, 443, 8080, 8443, and 6443.
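
The exact rule depends on your network setup. As a sketch, you can list the rules GKE created for the cluster to find the control plane's source range and the node target tag, then open the required ports. CONTROL_PLANE_CIDR and NODE_TARGET_TAG below are placeholders taken from that output, not values defined elsewhere in this guide:

gcloud compute firewall-rules list \
  --filter="name~gke-${CLUSTER_NAME}" \
  --format="table(name,sourceRanges.list(),targetTags.list())"

gcloud compute firewall-rules create "${CLUSTER_NAME}-kf-ports" \
  --network=${NETWORK} \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:443,tcp:8080,tcp:8443,tcp:6443 \
  --source-ranges=CONTROL_PLANE_CIDR \
  --target-tags=NODE_TARGET_TAG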

Enable Workload Identity

Now that you have a service account and GKE cluster, allow the Kf controller's Kubernetes service account (kf/controller) in the cluster's workload identity pool to impersonate the Google service account.

gcloud iam service-accounts add-iam-policy-binding \
  "${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --project=${CLUSTER_PROJECT_ID} \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[kf/controller]"
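
You can confirm the binding took effect by inspecting the service account's IAM policy; the output should include roles/iam.workloadIdentityUser with the kf/controller member:

gcloud iam service-accounts get-iam-policy \
  "${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --project=${CLUSTER_PROJECT_ID}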

Target GKE cluster

Configure kubectl command line access by running the following command.

gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --project=${CLUSTER_PROJECT_ID} \
    --zone=${CLUSTER_LOCATION}
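
A quick check that kubectl now targets the new cluster:

kubectl config current-context
kubectl get nodes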

Create an Artifact Registry repository

  1. Create an Artifact Registry repository to store container images.

    gcloud artifacts repositories create ${CLUSTER_NAME} \
      --repository-format=docker \
      --location=${COMPUTE_REGION}
  2. Grant the service account permission on the Artifact Registry repository.

    gcloud artifacts repositories add-iam-policy-binding ${CLUSTER_NAME} \
      --location=${COMPUTE_REGION} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role='roles/artifactregistry.writer'
  3. Configure your local authentication.

    gcloud auth configure-docker ${COMPUTE_REGION}-docker.pkg.dev
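
To confirm the repository exists and is reachable before builds push to it, you can describe it:

gcloud artifacts repositories describe ${CLUSTER_NAME} \
  --location=${COMPUTE_REGION} \
  --project=${CLUSTER_PROJECT_ID}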

Install software dependencies on the cluster

  1. Install Anthos Service Mesh (ASM).

  2. Install Tekton:

    kubectl apply -f "https://github.com/tektoncd/pipeline/releases/download/${TEKTON_VERSION}/release.yaml"
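
    The Tekton release manifest installs its components into the tekton-pipelines namespace. Optionally, a wait like the following blocks until they are ready:

    kubectl wait --for=condition=Ready pods --all \
      --namespace tekton-pipelines \
      --timeout=5m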

Install Kf

  1. See Create and prepare a new GKE cluster above to create a cluster prepared to run Kf.

  2. Select and note the desired Kf release. Reference the Kf Downloads page for available versions.

  3. Install CLI:

    Linux

    This will install kf for all users on the system. Follow the instructions in the Cloud Shell tab to install it just for yourself.

    gsutil cp gs://kf-releases/${KF_VERSION}/kf-linux /tmp/kf
    chmod a+x /tmp/kf
    sudo mv /tmp/kf /usr/local/bin/kf
    

    Mac

    This will install kf for all users on the system.

    gsutil cp gs://kf-releases/${KF_VERSION}/kf-darwin /tmp/kf
    chmod a+x /tmp/kf
    sudo mv /tmp/kf /usr/local/bin/kf
    

    Cloud Shell

    This will install kf on your Cloud Shell instance if you use bash; the instructions may need to be modified for other shells.

    mkdir -p ~/bin
    gsutil cp gs://kf-releases/${KF_VERSION}/kf-linux ~/bin/kf
    chmod a+x ~/bin/kf
    echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
    source ~/.bashrc
    

    Windows

    This will download kf to the current directory. Add it to your PATH if you want to call it from anywhere other than the current directory.

    gsutil cp gs://kf-releases/${KF_VERSION}/kf-windows.exe kf.exe
    
  4. Install server components:

    Linux and Mac

    This will download kf.yaml to /tmp and apply it to the cluster.

    gsutil cp gs://kf-releases/${KF_VERSION}/kf.yaml /tmp/kf.yaml
    kubectl apply -f /tmp/kf.yaml
    

    Windows

    This will download kf.yaml to the current directory and apply it to the cluster.

    gsutil cp gs://kf-releases/${KF_VERSION}/kf.yaml kf.yaml
    kubectl apply -f kf.yaml
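
    On either platform, you can then confirm the Kf system components start up in the kf namespace (pod names vary by release):

    kubectl get pods --namespace kf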
    
  5. Set up secrets:

    export WI_ANNOTATION=iam.gke.io/gcp-service-account=${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com
    
    kubectl annotate serviceaccount controller ${WI_ANNOTATION} \
    --namespace kf \
    --overwrite
    
    echo "{\"apiVersion\":\"v1\",\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"config-secrets\", \"namespace\":\"kf\"},\"data\":{\"wi.googleServiceAccount\":\"${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com\"}}" | kubectl apply -f -
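
    To check that the annotation landed on the controller service account, describe it and look for iam.gke.io/gcp-service-account under Annotations:

    kubectl describe serviceaccount controller --namespace kf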
  6. Set up Kf defaults; these values can be changed later. The example below uses domain templates with a wildcard DNS provider to give each Space its own domain name:

    export CONTAINER_REGISTRY=${COMPUTE_REGION}-docker.pkg.dev/${CLUSTER_PROJECT_ID}/${CLUSTER_NAME}
    export DOMAIN='$(SPACE_NAME).$(CLUSTER_INGRESS_IP).nip.io'
    
    kubectl patch configmaps config-defaults \
    -n=kf \
    -p="{\"data\":{\"spaceContainerRegistry\":\"${CONTAINER_REGISTRY}\",\"spaceClusterDomains\":\"- domain: ${DOMAIN}\"}}"
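
    With this template, each Space gets a domain of the form SPACE_NAME.INGRESS_IP.nip.io, where the ingress IP is resolved from the cluster's ingress gateway. Assuming Anthos Service Mesh exposed its gateway as the istio-ingressgateway service in the istio-system namespace (adjust the name and namespace to your mesh installation), you can look the IP up with:

    kubectl get service istio-ingressgateway --namespace istio-system \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'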
  7. Validate installation:

    kf doctor --retries 10