Continuous Delivery for Helm Charts on Kubernetes Engine using Concourse

This tutorial shows how to create a software release process with Kubernetes Engine, Helm, and Concourse. Helm is a tool that helps you package and manage Kubernetes manifests. Concourse uses Helm to continuously deploy your applications.

Following this tutorial results in:

  • Two source code repositories, one for your application source code and another for your Helm chart.
  • Your application, packaged as a Docker container and installed and configured as a Helm Chart in your cluster.

You can push a Git tag to either of the two repositories to start a release.

Architecture

The following diagram outlines the system's high-level architecture.

High-level architecture

Concourse assembles continuous delivery pipelines that you can use to codify the steps of your build, test, and release processes. In Concourse, the stages of pipelines are called jobs. Each job can take a resource as an input and can create a resource as an output.

In this tutorial, you use the following Concourse resource types:

  • The git resource type, for the app-source and chart-source repositories in Cloud Source Repositories.
  • The docker-image resource type, for the app-image container image in Container Registry.
  • The Concourse Helm Resource, for the dev-site release that installs the chart in your cluster.

You can use Helm to turn your Kubernetes manifests into templates and then configure, install, and upgrade them as a unit. You must define your application as a chart for Helm to install it. Each Helm chart has a values file that you can use to parameterize your manifests. For example, you might have a value that defines the Docker image to use for your application deployment. This value might change each time you install or upgrade your chart. Each installation of a chart is called a release. You use releases to upgrade or roll back an instance of your application.
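
For example, a minimal chart might template the container image from its values file. The names below (sample-app, image.repository, image.tag) are illustrative and are not taken from this tutorial's chart:

    # values.yaml: default, overridable settings for the chart
    image:
      repository: gcr.io/my-project/sample-app
      tag: v1.0.0

    # templates/deployment.yaml (excerpt): Helm fills in the values at install time
    spec:
      containers:
        - name: sample-app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

When you install or upgrade the chart, you can override these defaults with a flag such as --set image.tag=v2.0.0 or with an alternate values file.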

You can share charts by using a repository. In this tutorial, you create a private chart repository using Cloud Storage.

The following diagram shows the relationship between charts, repositories, and releases:

interrelationships of charts, repositories, and releases

In this tutorial, you build the continuous delivery pipeline shown in the following screenshot from Concourse. Inputs and outputs are shown in dark boxes. Pipeline jobs are shown in green boxes.

Continuous delivery pipeline

As the screenshot shows, the build-image job takes the application source code (app-source) as an input. When a new tag is pushed to the application source code, Concourse runs the build-image job. The build process checks out the code, performs a Docker build, and then pushes the image to Container Registry, labeling it as an output in the pipeline called app-image. The application Docker image is an input to the next job in the pipeline, deploy-chart. The deploy-chart job also takes the chart source code (chart-source) as an input.

When both the application image and the chart source code are available, the deploy-chart job starts and does the following (a simplified pipeline definition appears after this list):

  • Gets information about the application image, such as the repository and tag.
  • Checks out the chart source code from Cloud Source Repositories.
  • Packages the chart using the Helm client.
  • Uploads the packaged chart to your private chart repository in Cloud Storage using the Helm plugin for Cloud Storage.
  • Installs the chart in your Kubernetes Engine cluster using the Concourse Helm Resource, which is called dev-site in the previous screenshot.
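
The sample you download later includes a complete pipeline.yaml. The following sketch only illustrates the shape of such a pipeline; the resource and job names match the screenshot, but the source URIs, task definitions, and Helm resource configuration are abbreviated placeholders rather than excerpts from the sample:

    resources:
      - name: app-source          # Git tags in the application repository
        type: git
        source: {uri: ((app_repo_uri)), tag_filter: "v*"}
      - name: chart-source        # Git tags in the chart repository
        type: git
        source: {uri: ((chart_repo_uri)), tag_filter: "v*"}
      - name: app-image           # Docker image pushed to Container Registry
        type: docker-image
        source: {repository: gcr.io/((project))/sample-app}

    jobs:
      - name: build-image
        plan:
          - get: app-source
            trigger: true
          - put: app-image        # docker build, then push
            params: {build: app-source}
      - name: deploy-chart
        plan:
          - get: app-image
            trigger: true
            passed: [build-image]
          - get: chart-source
            trigger: true
          # task: package the chart and push it to the Cloud Storage repository,
          # then the Helm resource (dev-site) installs or upgrades the release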

Objectives

  • Open Cloud Shell, create a Kubernetes Engine cluster, and configure your identity and user management scheme.
  • Download a sample application, then create a Git repository and upload it to Cloud Source Repositories.
  • Use Helm to deploy Concourse to Kubernetes Engine.
  • Configure a Concourse pipeline to install and update your application.
  • Deploy a code change to your application to trigger the pipeline.
  • Deploy a change to your Helm chart to roll it out to your cluster.

Costs

This tutorial uses billable components of GCP, including the following:

  • Kubernetes Engine
  • Cloud Storage

Use the Pricing Calculator to generate a cost estimate based on your projected usage.

Before you begin

  1. Select or create a GCP project.

    Go to the Manage resources page

  2. Make sure that billing is enabled for your project.

    Learn how to enable billing

  3. Enable the Kubernetes Engine, Cloud Source Repositories, and Compute Engine APIs.

    Enable the APIs

Setting up your environment

Configure the infrastructure and identities that you need to complete the tutorial.

Start a Cloud Shell instance

Open Cloud Shell from Google Cloud Platform Console. You execute the rest of the tutorial from inside Cloud Shell.

Open Cloud Shell

Create a Kubernetes Engine cluster

You need a cluster with the cloud-source-repos-ro and storage-full scopes so that your Concourse pods can download your source code, push images, and upload charts to the Cloud Storage chart repository.

Create a Kubernetes Engine cluster to deploy Concourse and the sample Helm chart with the following command:

gcloud container clusters create concourse --image-type ubuntu \
    --machine-type n1-standard-2 --zone us-central1-f \
    --scopes cloud-source-repos-ro,storage-full
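
Optionally, before you continue, you can confirm that the scopes were applied to the cluster's nodes. This check is not part of the tutorial's required steps:

    gcloud container clusters describe concourse --zone us-central1-f \
        --format='value(nodeConfig.oauthScopes)'

The output should include the source.read_only and devstorage.full_control scope URLs, which correspond to the cloud-source-repos-ro and storage-full aliases.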

Download the sample code

You use an existing application, Helm chart, and pipeline in this tutorial. Download the sample code from a Cloud Storage bucket created for this tutorial, and then extract it into Cloud Shell:

wget https://gke-concourse.storage.googleapis.com/sample-app-v4.zip
unzip sample-app-v4.zip
cd concourse-continuous-delivery-master

Deploying Concourse using Helm

Use Helm to deploy Concourse from the Charts repository.

Install Helm

  1. In Cloud Shell, download and install the Helm binary:

    wget https://storage.googleapis.com/kubernetes-helm/helm-v2.6.2-linux-amd64.tar.gz

  2. Extract the file:

    tar zxfv helm-v2.6.2-linux-amd64.tar.gz
    cp linux-amd64/helm .

  3. Grant your user account the cluster-admin role in your cluster:

    kubectl create clusterrolebinding user-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)

  4. Create a service account that Tiller, the server side of Helm, uses to deploy your charts:

    kubectl create serviceaccount tiller --namespace kube-system

  5. Grant the Tiller service account the cluster-admin role in your cluster:

    kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

  6. Grant Concourse's service account the cluster-admin role so it can deploy resources across all namespaces:

    kubectl create clusterrolebinding concourse-admin --clusterrole=cluster-admin --serviceaccount=default:default

  7. Initialize Helm to install Tiller in your cluster:

    ./helm init --service-account=tiller
    ./helm repo update

  8. Install the Helm plugin and initialize your private chart repository in a Cloud Storage bucket (an example of manually publishing a chart to this repository follows this procedure). The first command stores your current project ID in an environment variable. The second command uses the project ID to create a unique name for the bucket. The name is then stored in another environment variable for use in subsequent commands:

    export PROJECT=$(gcloud info --format='value(config.project)')
    export BUCKET=$PROJECT-helm-repo
    ./helm plugin install https://github.com/viglesiasce/helm-gcs.git --version v0.1.1
    gsutil mb -l us-central1 gs://$BUCKET
    ./helm gcs init gs://$BUCKET

  9. Ensure that Helm is properly installed:

    ./helm version

    You see output similar to the following. If Helm is correctly installed, v2.6.2 appears for both client and server.

    Client: &version.Version{SemVer:"v2.6.2", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.6.2", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}

    If you see a message that Helm couldn't contact the Tiller process, wait a short while and then try again. It can take Tiller up to 2 minutes to initialize.
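
In this tutorial the pipeline packages and publishes charts for you, but you can also publish one by hand to see how the Cloud Storage repository works. The chart name mychart and version 0.1.0 below are hypothetical, and the push command assumes the plugin behaves as described in its documentation:

    ./helm package mychart                          # produces mychart-0.1.0.tgz from the chart directory
    ./helm gcs push mychart-0.1.0.tgz gs://$BUCKET  # uploads the package and updates the repository's index.yaml
    gsutil ls gs://$BUCKET                          # the package and index.yaml appear in the bucket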

Deploy Concourse

  1. In Cloud Shell, create a file with the configuration values to pass to the Concourse installation. The first command uses the openssl command to generate a random string that is used as the Concourse admin user's password:

    export PASSWORD=$(openssl rand -base64 15)
    cat > concourse.yaml <<EOF
    concourse:
      password: $PASSWORD
      baggageclaimDriver: overlay
    web:
      service:
        type: LoadBalancer
    EOF

  2. Use Helm to deploy the Concourse chart:

    ./helm install stable/concourse --name concourse -f concourse.yaml --version 0.10.0

  3. Wait 3 to 4 minutes, then verify that the concourse-web pod is running and ready:

    kubectl get pods -l app=concourse-web

    When the pod is ready, you see output similar to the following:

    NAME                             READY     STATUS    RESTARTS   AGE
    concourse-web-2711520304-483vw   1/1       Running   0          3m

  4. Download fly, the Concourse command-line tool:

    export SERVICE_IP=$(kubectl get svc \
        --namespace default concourse-web \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    wget -O fly "http://$SERVICE_IP:8080/api/v1/cli?arch=amd64&platform=linux"
    chmod +x fly

  5. Authenticate the fly CLI to your Concourse installation:

    ./fly -t local login -u concourse -p $PASSWORD -c http://$SERVICE_IP:8080

  6. Get the URL of your Concourse user interface:

    export SERVICE_IP=$(kubectl get svc \
        --namespace default concourse-web \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    printf "Concourse URL: http://$SERVICE_IP:8080\nUsername: concourse\nPassword: $PASSWORD\n"

    This command outputs a URL, username, and password.

  7. In your browser, go to the URL from the previous command. This takes you to Concourse.

  8. In Concourse, click the login button in the top right corner, and then click main.
  9. Log in with the username and password from the previous command. An optional command-line check of your installation follows this procedure.
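
As an optional check that your fly target works and that a Concourse worker registered with the web node, you can list workers and pipelines. These are standard fly commands rather than tutorial steps:

    ./fly -t local workers     # at least one worker should be listed in the running state
    ./fly -t local pipelines   # empty for now; dev-site-deploy appears after you create the pipeline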

Configure identity and access management

Create a Cloud Identity and Access Management (Cloud IAM) service account that permits Concourse to push images to Container Registry.

  1. In Cloud Shell, create the service account:

    gcloud iam service-accounts create concourse --display-name concourse

  2. Store the service account email address and your current project ID in environment variables for use in later commands:

    export SA_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:concourse" --format='value(email)')
    export PROJECT=$(gcloud info --format='value(config.project)')

  3. Bind the storage.admin role to your service account:

    gcloud projects add-iam-policy-binding $PROJECT \
        --role roles/storage.admin --member serviceAccount:$SA_EMAIL

  4. Download the service account key. In a later step, you pass this key to your pipeline as a parameter so that Concourse can authenticate to Container Registry. A sketch of how a pipeline can use such a key follows this procedure.

    gcloud iam service-accounts keys create concourse-sa.json \
        --iam-account $SA_EMAIL
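
The sample pipeline consumes this key through the service_account_json parameter that you set later in params.yaml. A common pattern for a Concourse docker-image resource is to authenticate to Container Registry with the _json_key username and the key file contents as the password. The following snippet illustrates that pattern; it is not an excerpt from the sample pipeline:

    resources:
      - name: app-image
        type: docker-image
        source:
          repository: gcr.io/((project))/sample-app
          username: _json_key
          password: ((service_account_json))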

Deploying your application

In this section, you create your source code repositories, configure and create your pipeline, deploy your application, change it, and then deploy it again.

Create source code repositories

In this tutorial, you use two repositories: one for your application source code and the other for the source code that defines your Helm chart. Pushing a Git tag to either repository triggers the pipeline and updates your deployment.

  1. Create the repositories:

    gcloud source repos create chart-source
    gcloud source repos create app-source

  2. Set the username and email address for your Git commits in these repositories. Replace [EMAIL_ADDRESS] with your Git email address, and replace [USERNAME] with your Git username.

    git config --global user.email "[EMAIL_ADDRESS]"
    git config --global user.name "[USERNAME]"

  3. Push the chart and application source code from your local repository to Cloud Source Repositories:

    export PROJECT=$(gcloud info --format='value(config.project)')
    for repo in app-source chart-source; do
        cd $repo
        git init && git add . && git commit -m 'Initial commit'
        git config credential.helper gcloud.sh
        git remote add google \
            https://source.developers.google.com/p/$PROJECT/r/$repo
        git push --all google
        cd ..
    done

Configure and create the pipeline

Concourse allows you to interpolate parameters into your pipelines. You use this functionality to parameterize your pipeline as well as to inject secrets that you don't want checked into your source code repository.
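
For example, a bucket name can appear in pipeline.yaml as a placeholder that fly fills in from params.yaml when you run set-pipeline with the -l flag. Depending on your Concourse version, placeholders use the ((var)) or {{var}} syntax; the snippet below only illustrates the mechanism and is not an excerpt from the sample pipeline (the task file path is hypothetical):

    # pipeline.yaml (illustrative excerpt of a job's task step)
    - task: package-chart
      file: chart-source/ci/package.yaml   # hypothetical task file
      params:
        BUCKET: ((bucket))
        PROJECT: ((project))

    # params.yaml supplies the values when you run:
    #   ./fly -t local set-pipeline -p dev-site-deploy -c pipeline.yaml -l params.yaml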

  1. In Cloud Shell, create your configuration file:

    export PROJECT=$(gcloud info --format='value(config.project)')
    export BUCKET=$PROJECT-helm-repo
    export TOKEN_SECRET=$(kubectl get serviceaccount default -o jsonpath="{.secrets[0].name}")
    export CLUSTER_CA=$(kubectl get secret $TOKEN_SECRET -o jsonpath='{.data.ca\.crt}')
    export TOKEN=$(kubectl get secret $TOKEN_SECRET -o jsonpath='{.data.token}' | base64 --decode)

    cat > params.yaml <<EOF
    chart_name: nginx
    release_name: dev-site
    bucket: $BUCKET
    cluster_ca: $CLUSTER_CA
    token: $TOKEN
    project: $PROJECT
    service_account_json: '$(cat concourse-sa.json)'
    EOF

  2. Using the fly CLI in Cloud Shell, upload the pipeline file:

    ./fly -t local set-pipeline -p dev-site-deploy \
        -c pipeline.yaml -l params.yaml -n

  3. Unpause the pipeline to enable it:

    ./fly -t local unpause-pipeline -p dev-site-deploy

  4. Refresh Concourse in your browser.

    You see your pipeline:

    Refreshed pipeline

Deploying your application for the first time

The pipeline is configured to be triggered when a Git tag is pushed to either the application or chart source code repository. Push the first tags in each repository to deploy your application for the first time.

  1. In Cloud Shell, create and push the Git tags:

    for repo in app-source chart-source; do
        cd $repo
        git tag v1.0.0
        git push google --tags
        cd ..
    done

  2. Go to Concourse in your browser. After 1 to 2 minutes, you see that the pipeline has started, denoted by the pulsating yellow circles around the build-image job (you can also follow the build from the command line, as shown after this procedure):

    Pipeline running
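
If you prefer the command line, fly can list builds and stream a job's output while it runs. These are standard fly commands rather than tutorial steps:

    ./fly -t local builds                                 # recent builds and their status
    ./fly -t local watch -j dev-site-deploy/build-image   # stream output from the build-image job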

Deploying a change to the application

  1. After the pipeline completes, check which version of the web server is currently deployed by forwarding a local port to one of the application's containers:

    export POD_NAME=$(kubectl get pods --namespace default \
        -l "app=nginx,release=dev-site" \
        -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 8080:80 &
    curl -is localhost:8080 | grep 'Server\|color'

    You see that the web server is running the stable version of nginx and the color is set to blue:

    Server: nginx/1.14.0
    <h1 style="color:blue;">Welcome to the sample app!</h1>

  2. Stop port forwarding:

    killall kubectl

  3. Change to the application subdirectory:

    cd app-source

  4. Change the web server version used in the Dockerfile from stable to latest:

    sed -i s/stable/latest/ Dockerfile

  5. Commit and tag the source:

    git add Dockerfile
    git commit -m 'Use latest NGINX'
    git tag v2.0.0

  6. Push the code to Cloud Source Repositories:

    git push google --mirror

  7. Using the fly CLI in Cloud Shell, force Concourse to check the source code for new tags immediately:

    ../fly -t local check-resource -r dev-site-deploy/app-source

    Your pipeline starts again.

  8. When the pipeline completes, re-run the following commands to check the nginx version:

    export POD_NAME=$(kubectl get pods --namespace default \
        -l "app=nginx,release=dev-site" \
        -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 8080:80 &
    curl -is localhost:8080 | grep 'Server\|color'

    The nginx version is upgraded to the latest version:

    Server: nginx/1.15.0
    <h1 style="color:blue;">Welcome to the sample app!</h1>

  9. Stop port forwarding:

    killall kubectl

Next, you make a change to the chart configuration to trigger an update.

Deploying a change to the chart

The chart definition contains a Kubernetes ConfigMap that defines your application's default page. You update that ConfigMap to change the application's index page.
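
The exact template comes from the sample chart you downloaded earlier; the following sketch only illustrates the general shape of such a ConfigMap template, including the heading markup whose color you change in the next steps. The helper name nginx.fullname is a common chart convention and is assumed here, not copied from the sample:

    # templates/config-map.yaml (illustrative sketch)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ template "nginx.fullname" . }}
    data:
      index.html: |
        <h1 style="color:blue;">Welcome to the sample app!</h1>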

  1. In Cloud Shell, go to the chart subdirectory:

    cd ../chart-source/

  2. Change the color of the heading from blue to green by replacing the color string in the ConfigMap template:

    sed -i s/blue/green/ templates/config-map.yaml

  3. Use Git to commit and tag the source code:

    git add templates/config-map.yaml
    git commit -m 'Use green for page heading'

  4. Tag and push the code to start your pipeline:

    git tag v2.0.0
    git push google --mirror

  5. Use the fly command to force Concourse to check the source code for new tags immediately:

    ../fly -t local check-resource -r dev-site-deploy/chart-source

    In Concourse, your pipeline starts from the chart deployment job.

  6. In Cloud Shell, check the output of the application:

    export POD_NAME=$(kubectl get pods \
        -n default -l "app=nginx,release=dev-site" \
        -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 8080:80 &
    curl -is localhost:8080 | grep 'Server\|color'

    You see the following output. This time, the server responds with markup that uses green instead of blue:

    Server: nginx/1.15.0
    <h1 style="color:green;">Welcome to the sample app!</h1>

Cleaning up

The following steps remove all resources created by this tutorial so that you don't incur unnecessary charges.

  1. Delete the Concourse installation:

    ../helm delete --purge concourse

  2. Delete the service account:

    export SA_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:concourse" \
        --format='value(email)')
    gcloud iam service-accounts delete $SA_EMAIL

  3. Delete the Kubernetes Engine cluster:

    gcloud container clusters delete concourse --zone us-central1-f

  4. Delete the source code repositories:

    gcloud source repos delete app-source --quiet
    gcloud source repos delete chart-source --quiet

  5. Delete the bucket:

    export PROJECT=$(gcloud info --format='value(config.project)')
    export BUCKET=$PROJECT-helm-repo
    gsutil -m rm -r gs://$BUCKET
