Install Kf on Google Cloud

This document describes how to set up a GKE cluster, and then install Kf and its dependencies.

Before you begin

Review the following before you install:

  • GKE cluster requirements
  • Kf requirements
  • The access permissions of Kf components, described on the Kf dependencies and architecture page.
  • The Dependency matrix, which lists the specific supported versions.

Enable support for Compute Engine

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Compute Engine API.

    Enable the API

Enable support for Artifact Registry

  1. Enable the Artifact Registry API.

    Enable the Artifact Registry API

Enable and configure GKE

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
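
If you prefer the gcloud CLI to the console buttons above, you can enable the same APIs from the command line; the service names below are the standard identifiers for the Compute Engine, Kubernetes Engine, and Artifact Registry APIs.

gcloud services enable \
  compute.googleapis.com \
  container.googleapis.com \
  artifactregistry.googleapis.com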

Set up environment variables

Linux and Mac

export PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_NAME=kf-cluster
export COMPUTE_ZONE=us-central1-a
export COMPUTE_REGION=us-central1
export CLUSTER_LOCATION=${COMPUTE_ZONE} # Use ${COMPUTE_REGION} instead for a regional cluster
export NODE_COUNT=4
export MACHINE_TYPE=e2-standard-4
export NETWORK=default

Windows PowerShell

Set-Variable -Name PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_NAME -Value kf-cluster
Set-Variable -Name COMPUTE_ZONE -Value us-central1-a
Set-Variable -Name COMPUTE_REGION -Value us-central1
Set-Variable -Name CLUSTER_LOCATION -Value $COMPUTE_ZONE # Use $COMPUTE_REGION instead for a regional cluster
Set-Variable -Name NODE_COUNT -Value 4
Set-Variable -Name MACHINE_TYPE -Value e2-standard-4
Set-Variable -Name NETWORK -Value default
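
Optionally, point gcloud at the project and zone you just defined so interactive commands can omit explicit flags; the commands in this guide still pass --project and --zone/--location explicitly.

gcloud config set project ${PROJECT_ID}
gcloud config set compute/zone ${COMPUTE_ZONE}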

Set up service account

Create a Google Cloud service account that will be associated with a Kubernetes service account via Workload Identity. This avoids the need to create and inject a service account key.

  1. Create the service account that Kf will use.

    gcloud iam service-accounts create ${CLUSTER_NAME}-sa \
    --project=${CLUSTER_PROJECT_ID} \
    --description="GSA for Kf ${CLUSTER_NAME}" \
    --display-name="${CLUSTER_NAME}"
  2. Create a new custom IAM role.

    gcloud iam roles create serviceAccountUpdater \
    --project=${CLUSTER_PROJECT_ID} \
    --title "Service Account Updater" \
    --description "This role only updates members on a GSA" \
    --permissions iam.serviceAccounts.get,iam.serviceAccounts.getIamPolicy,iam.serviceAccounts.list,iam.serviceAccounts.setIamPolicy
  3. Allow the service account to modify its own IAM policy. The Kf controller uses this to add bindings for new Kf Spaces, so the same service account can be reused for Workload Identity.

    gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role="projects/${CLUSTER_PROJECT_ID}/roles/serviceAccountUpdater"
  4. Grant the monitoring metrics writer role for write access to Cloud Monitoring.

    gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role="roles/monitoring.metricWriter"
  5. Grant the logging writer role for write access to Cloud Logging.

    gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role="roles/logging.logWriter"
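  6. Optionally, confirm the roles bound to the service account. The output should include the custom serviceAccountUpdater role plus the monitoring and logging writer roles granted above.

    gcloud projects get-iam-policy ${CLUSTER_PROJECT_ID} \
      --flatten="bindings[].members" \
      --filter="bindings.members:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --format="value(bindings.role)"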

Create GKE cluster

gcloud container clusters create ${CLUSTER_NAME} \
  --project=${CLUSTER_PROJECT_ID} \
  --zone=${CLUSTER_LOCATION} \
  --num-nodes=${NODE_COUNT} \
  --machine-type=${MACHINE_TYPE} \
  --disk-size "122" \
  --network=${NETWORK} \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --enable-dataplane-v2 \
  --enable-stackdriver-kubernetes \
  --enable-ip-alias \
  --enable-autorepair \
  --enable-autoupgrade \
  --scopes cloud-platform \
  --release-channel=regular \
  --workload-pool="${CLUSTER_PROJECT_ID}.svc.id.goog" \
  --service-account="${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com"
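
Cluster creation can take several minutes. Optionally, confirm the cluster reports a RUNNING status before continuing:

gcloud container clusters describe ${CLUSTER_NAME} \
  --project=${CLUSTER_PROJECT_ID} \
  --zone=${CLUSTER_LOCATION} \
  --format="value(status)"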

Set firewall rules

Kf requires some firewall ports to be open. The cluster control plane must be able to reach pods on ports 80, 443, 8080, 8443, and 6443.
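
The exact rule depends on your network layout, so treat the following as a sketch rather than a drop-in command: CONTROL_PLANE_CIDR and NODE_TAG are placeholders for your cluster's control plane IP range and node network tag, which you can look up with gcloud container clusters describe and gcloud compute instances describe.

gcloud compute firewall-rules create ${CLUSTER_NAME}-kf-ports \
  --project=${CLUSTER_PROJECT_ID} \
  --network=${NETWORK} \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:443,tcp:8080,tcp:8443,tcp:6443 \
  --source-ranges=CONTROL_PLANE_CIDR \
  --target-tags=NODE_TAG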

Enable Workload Identity

Now that you have a service account and a GKE cluster, allow the Kubernetes service accounts used by Kf and Config Connector to impersonate the Google Cloud service account through the cluster's workload identity pool.

gcloud iam service-accounts add-iam-policy-binding \
  --project=${CLUSTER_PROJECT_ID} \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[kf/controller]" \
  "${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com"

gcloud iam service-accounts add-iam-policy-binding \
  --project=${CLUSTER_PROJECT_ID} \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[cnrm-system/cnrm-controller-manager]" \
  "${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com"

Target GKE cluster

Configure kubectl command line access by running the following command.

gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --project=${CLUSTER_PROJECT_ID} \
    --zone=${CLUSTER_LOCATION}
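
A quick way to confirm kubectl now targets the new cluster is to check the current context and list nodes; you should see NODE_COUNT nodes in the Ready state.

kubectl config current-context
kubectl get nodes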

Create an Artifact Registry repository

  1. Create an Artifact Registry repository to store container images.

    gcloud artifacts repositories create ${CLUSTER_NAME} \
      --project=${CLUSTER_PROJECT_ID} \
      --repository-format=docker \
      --location=${COMPUTE_REGION}
  2. Grant the service account permission on the Artifact Registry repository.

    gcloud artifacts repositories add-iam-policy-binding ${CLUSTER_NAME} \
      --project=${CLUSTER_PROJECT_ID} \
      --location=${COMPUTE_REGION} \
      --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
      --role='roles/artifactregistry.writer'
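  3. Optionally, confirm the repository was created in the expected region.

    gcloud artifacts repositories describe ${CLUSTER_NAME} \
      --project=${CLUSTER_PROJECT_ID} \
      --location=${COMPUTE_REGION}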

Install software dependencies on a cluster

  1. Install Cloud Service Mesh v1.23.3-asm.1+config1.

    1. Follow the Cloud Service Mesh install guide, including steps to create an ingress gateway.
  2. Install Config Connector.

    1. Download the required Config Connector Operator tar file.
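
      For example, the command below copies the latest operator bundle from the public Config Connector bucket; check the Dependency matrix for the Config Connector version Kf expects and substitute it into the path if a specific version is pinned.

      gcloud storage cp gs://configconnector-operator/latest/release-bundle.tar.gz release-bundle.tar.gz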

    2. Extract the tar file.

      tar zxvf release-bundle.tar.gz
      
    3. Install the Config Connector operator on your cluster.

      kubectl apply -f operator-system/configconnector-operator.yaml
      
    4. Configure the Config Connector operator.

      1. Copy the following YAML into a file named configconnector.yaml:

        # configconnector.yaml
        apiVersion: core.cnrm.cloud.google.com/v1beta1
        kind: ConfigConnector
        metadata:
          # the name is restricted to ensure that there is only one
          # ConfigConnector resource installed in your cluster
          name: configconnector.core.cnrm.cloud.google.com
        spec:
          mode: cluster
          googleServiceAccount: "KF_SERVICE_ACCOUNT_NAME" # Replace KF_SERVICE_ACCOUNT_NAME with the full service account email, i.e. the resolved value of ${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com
        
      2. Apply the configuration to your cluster.

        kubectl apply -f configconnector.yaml
    5. Verify Config Connector is fully installed before continuing.

      • Config Connector runs all of its components in a namespace named cnrm-system. Verify the Pods are ready by running the following command:

        kubectl wait -n cnrm-system --for=condition=Ready pod --all
      • If Config Connector is installed correctly, expect to see output similar to the following:

        pod/cnrm-controller-manager-0 condition met
        pod/cnrm-deletiondefender-0 condition met
        pod/cnrm-resource-stats-recorder-86858dcdc5-6lqzb condition met
        pod/cnrm-webhook-manager-58c799b8fb-kcznq condition met
        pod/cnrm-webhook-manager-58c799b8fb-n2zpx condition met
    6. Set up Workload Identity by annotating the Config Connector controller's Kubernetes service account.

      kubectl annotate serviceaccount \
      --namespace cnrm-system \
      --overwrite \
      cnrm-controller-manager \
      iam.gke.io/gcp-service-account=${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com
  3. Install Tekton:

    kubectl apply -f "https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.65.1/release.yaml"
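
    Optionally, wait for the Tekton components to become ready before installing Kf; this assumes the release uses its default tekton-pipelines namespace.

    kubectl wait -n tekton-pipelines --for=condition=Ready pod --all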

Install Kf

  1. Install the Kf CLI:

    Linux

    This command installs the Kf CLI for all users on the system. Follow the instructions in the Cloud Shell tab to install it just for yourself.

    gcloud storage cp gs://kf-releases/v2.11.27/kf-linux /tmp/kf
    chmod a+x /tmp/kf
    sudo mv /tmp/kf /usr/local/bin/kf
    

    Mac

    This command installs kf for all users on the system.

    gcloud storage cp gs://kf-releases/v2.11.27/kf-darwin /tmp/kf
    chmod a+x /tmp/kf
    sudo mv /tmp/kf /usr/local/bin/kf
    

    Cloud Shell

    This command installs kf on your Cloud Shell instance if you use bash; the instructions may need to be modified for other shells.

    mkdir -p ~/bin
    gcloud storage cp gs://kf-releases/v2.11.27/kf-linux ~/bin/kf
    chmod a+x ~/bin/kf
    echo "export PATH=$HOME/bin:$PATH" >> ~/.bashrc
    source ~/.bashrc
    

    Windows

    This command downloads kf to the current directory. Add it to your PATH if you want to call it from anywhere other than the current directory.

    gcloud storage cp gs://kf-releases/v2.11.27/kf-windows.exe kf.exe
    
  2. Install the operator:

    kubectl apply -f "https://storage.googleapis.com/kf-releases/v2.11.27/operator.yaml"
  3. Configure the operator for Kf:

    kubectl apply -f "https://storage.googleapis.com/kf-releases/v2.11.27/kfsystem.yaml"
  4. Set up secrets and defaults:

    export CONTAINER_REGISTRY=${COMPUTE_REGION}-docker.pkg.dev/${CLUSTER_PROJECT_ID}/${CLUSTER_NAME}
    
    kubectl patch \
    kfsystem kfsystem \
    --type='json' \
    -p="[{'op': 'replace', 'path': '/spec/kf', 'value': {'enabled': true, 'config': {'spaceContainerRegistry': '${CONTAINER_REGISTRY}', 'secrets':{'workloadidentity':{'googleserviceaccount':'${CLUSTER_NAME}-sa', 'googleprojectid':'${CLUSTER_PROJECT_ID}'}}}}}]"
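
    Before validating, you can optionally wait for the operator to finish rolling out the Kf control plane. This sketch assumes the components land in the kf namespace (the same namespace referenced by the Workload Identity binding earlier); if the namespace or pods do not exist yet, re-run the command after a short wait.

    kubectl wait -n kf --for=condition=Ready pod --all --timeout=300s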
    

Validate installation

  kf doctor --retries=20

Clean up

These steps remove all of the components created in the sections above.

  1. Delete Google Service Account:

    gcloud iam service-accounts delete ${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com
  2. Delete IAM policy bindings:

    gcloud projects remove-iam-policy-binding ${CLUSTER_PROJECT_ID} \
        --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
        --role="projects/${CLUSTER_PROJECT_ID}/roles/serviceAccountUpdater"
    gcloud projects remove-iam-policy-binding ${CLUSTER_PROJECT_ID} \
        --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
        --role="roles/monitoring.metricWriter"
    gcloud projects remove-iam-policy-binding ${CLUSTER_PROJECT_ID} \
        --member="serviceAccount:${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
        --role="roles/logging.logWriter"
    
  3. Delete container image repository:

    gcloud artifacts repositories delete ${CLUSTER_NAME} \
      --location=${COMPUTE_REGION}
  4. Uninstall Kf:

    kubectl patch kfsystem kfsystem \
      --type='json' \
      -p="[{'op': 'replace', 'path': '/spec/kf', 'value': {'enabled': false}}]"
    
  5. (Optional) Delete GKE cluster:

    gcloud container clusters delete ${CLUSTER_NAME} --zone ${CLUSTER_LOCATION}