Using Kubernetes Engine to Deploy Apps with Regional Persistent Disks

This tutorial shows how to deploy a highly available WordPress app on Google Kubernetes Engine (GKE) by using regional persistent disks, which provide synchronous replication between two zones.

Objectives

  • Create a regional GKE cluster.
  • Create a Kubernetes StorageClass resource that is configured for replicated zones.
  • Deploy WordPress with a regional disk that uses the StorageClass.
  • Simulate a zone failure by deleting a node.
  • Verify that the WordPress app and data migrate successfully to another replicated zone.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Compute Engine (regional persistent disks)

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Cloud Console, on the project selector page, select or create a Cloud project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  4. Enable the Compute Engine and GKE APIs.

    Enable the APIs

Creating the GKE cluster

  1. Open Cloud Shell:

    You run the remainder of this tutorial from Cloud Shell.

  2. Create a regional GKE cluster that spans two zones in the us-west1 region:

    CLUSTER_VERSION=$(gcloud beta container get-server-config --region us-west1 --format='value(validMasterVersions[0])')
    
    export CLOUDSDK_CONTAINER_USE_V1_API_CLIENT=false
    gcloud beta container clusters create repd \
      --cluster-version=${CLUSTER_VERSION} \
      --machine-type=n1-standard-4 \
      --region=us-west1 \
      --num-nodes=1 \
      --node-locations=us-west1-b,us-west1-c
    

You now have a regional cluster with one node in each zone. The gcloud command has also automatically configured the kubectl command to connect to the cluster.
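Note that in a regional cluster, the `--num-nodes` value is the node count per zone, not the total. A quick sanity check of the expected totals (pure shell arithmetic; the commented kubectl line requires access to the new cluster):

```shell
# In regional clusters, --num-nodes applies per zone: with two zones in
# --node-locations and --num-nodes=1, the cluster gets 2 nodes total.
NUM_NODES_PER_ZONE=1
ZONES="us-west1-b,us-west1-c"      # same value passed to --node-locations
ZONE_COUNT=$(awk -F',' '{print NF}' <<<"$ZONES")
EXPECTED_NODES=$((NUM_NODES_PER_ZONE * ZONE_COUNT))
echo "Expecting ${EXPECTED_NODES} nodes total"   # prints: Expecting 2 nodes total

# Against the live cluster:
# kubectl get nodes --no-headers | wc -l
```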

Deploying the app with a regional disk

In this section, you install Helm, create the Kubernetes StorageClass that is used by the regional persistent disk, and deploy WordPress.

Install and initialize Helm

The chart package, which is installed with Helm, contains everything you need to run WordPress.

  1. Install Helm locally in your Cloud Shell instance:

    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
    chmod 700 get_helm.sh
    ./get_helm.sh
    
  2. Initialize Helm:

    kubectl create serviceaccount tiller --namespace kube-system
    kubectl create clusterrolebinding tiller-cluster-rule \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller
    helm init --service-account=tiller
    until (helm version --tiller-connection-timeout=1 >/dev/null 2>&1); do echo "Waiting for tiller install..."; sleep 2; done && echo "Helm install complete"
    

Helm is now installed in your cluster.

Create the StorageClass

In this section, you create the StorageClass that the chart uses to define the zones of the regional disk. The zones listed in the StorageClass must match the zones of the GKE cluster.

  1. Create a StorageClass for the regional disk:

    kubectl apply -f - <<EOF
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: repd-west1-b-c
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-standard
      replication-type: regional-pd
      zones: us-west1-b, us-west1-c
    EOF
    

You now have a StorageClass that is capable of provisioning PersistentVolumes that are replicated across the us-west1-b and us-west1-c zones.
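For reference, a PersistentVolumeClaim that requests storage from this class would look like the following. This is illustrative only: the WordPress chart creates an equivalent claim for you when you set persistence.storageClass, and the name example-repd-claim is ours.

```yaml
# Illustrative PVC against the repd-west1-b-c StorageClass (not part of the
# tutorial's steps; the Helm chart creates its own claims).
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-repd-claim
spec:
  storageClassName: repd-west1-b-c
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi   # regional pd-standard disks have a 200 GB minimum size
```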

Deploy WordPress

When you deploy WordPress, Kubernetes automatically attaches the regional persistent disk to an appropriate node in one of the availability zones.

  1. Deploy the WordPress chart that is configured to use the StorageClass that you created earlier:

    helm install --name wp-repd \
      --set persistence.storageClass=repd-west1-b-c \
      --set mariadb.persistence.storageClass=repd-west1-b-c \
      stable/wordpress
    
  2. Run the following command, which waits for the service load balancer's external IP address to be created:

    while [[ -z $SERVICE_IP ]]; do SERVICE_IP=$(kubectl get svc wp-repd-wordpress -o jsonpath='{.status.loadBalancer.ingress[].ip}'); echo "Waiting for service external IP..."; sleep 2; done; echo http://$SERVICE_IP/admin
    
  3. Verify that the persistent disk was created:

    while [[ -z $PV ]]; do PV=$(kubectl get pvc wp-repd-wordpress -o jsonpath='{.spec.volumeName}'); echo "Waiting for PV..."; sleep 2; done
    
    kubectl describe pv $PV
    
  4. Open the WordPress admin page in your browser from the link that the command output displays:

    echo http://$SERVICE_IP/admin
  5. Log in with the username and password that the command output displays:

    cat - <<EOF
    Username: user
    Password: $(kubectl get secret --namespace default wp-repd-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
    EOF
    

You now have a working deployment of WordPress that is backed by regional persistent disks in two zones.
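The polling one-liners in steps 2 and 3 follow the same pattern; if you prefer, you can factor them into a small helper. This is a sketch of our own (the `wait_for` name is not part of the chart or tutorial tooling):

```shell
# wait_for CMD...: run a command every 2 seconds until it prints non-empty
# output, then echo that output.
wait_for() {
  local out=""
  while [[ -z "$out" ]]; do
    out=$("$@" 2>/dev/null)
    [[ -z "$out" ]] && sleep 2
  done
  printf '%s\n' "$out"
}

# With the cluster available, the earlier waits become:
# SERVICE_IP=$(wait_for kubectl get svc wp-repd-wordpress \
#   -o jsonpath='{.status.loadBalancer.ingress[].ip}')
# PV=$(wait_for kubectl get pvc wp-repd-wordpress -o jsonpath='{.spec.volumeName}')
```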

Simulating a zone failure

In this section, you simulate a zone failure and watch Kubernetes move your workload to the other zone and attach the regional disk to the new node.

  1. Obtain the current node of the WordPress pod:

    NODE=$(kubectl get pods -l app=wp-repd-wordpress -o jsonpath='{.items..spec.nodeName}')
    
    ZONE=$(kubectl get node $NODE -o jsonpath="{.metadata.labels['failure-domain\.beta\.kubernetes\.io/zone']}")
    
    IG=$(gcloud compute instance-groups list --filter="name~gke-repd-default-pool zone:(${ZONE})" --format='value(name)')
    
    echo "Pod is currently on node ${NODE}"
    
    echo "Instance group to delete: ${IG} for zone: ${ZONE}"
    
  2. Simulate a zone failure by deleting the instance group for the node where the WordPress pod is running:

    gcloud compute instance-groups managed delete ${IG} --zone ${ZONE}
  3. Verify that both the WordPress pod and the persistent volume migrate to the node that is in the other zone:

    kubectl get pods -l app=wp-repd-wordpress -o wide

    Make sure the node that is displayed is different from the node in the previous step.

  4. Open the WordPress admin page in your browser from the link displayed in the command output:

    echo http://$SERVICE_IP/admin

    Kubernetes has attached the regional persistent disk to a node in the other zone.
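You can make the zone comparison explicit with a small check. The `confirm_failover` helper name is ours; the commented kubectl lines require the cluster, and $ZONE holds the zone you deleted earlier:

```shell
# confirm_failover OLD_ZONE NEW_ZONE: report whether the pod moved zones.
confirm_failover() {
  local old_zone="$1" new_zone="$2"
  if [[ -n "$new_zone" && "$new_zone" != "$old_zone" ]]; then
    echo "Failover confirmed: ${old_zone} -> ${new_zone}"
  else
    echo "Pod still in ${old_zone}"
  fi
}

# Against the live cluster:
# NEW_NODE=$(kubectl get pods -l app=wp-repd-wordpress -o jsonpath='{.items..spec.nodeName}')
# NEW_ZONE=$(kubectl get node "$NEW_NODE" \
#   -o jsonpath="{.metadata.labels['failure-domain\.beta\.kubernetes\.io/zone']}")
# confirm_failover "$ZONE" "$NEW_ZONE"
```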

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial:

  1. Delete the WordPress app and persistent disk:

    helm delete --purge wp-repd
  2. Wait for all persistent volumes to be deleted:

    while [[ $(kubectl get pv --no-headers 2>/dev/null | wc -l) -gt 0 ]]; do echo "Waiting for PV deletion..."; sleep 2; done
  3. Delete the GKE cluster:

    export CLOUDSDK_CONTAINER_USE_V1_API_CLIENT=false
    gcloud beta container clusters delete repd --region=us-west1

What's next