Using Kubernetes Engine to Deploy Apps with Regional Persistent Disks

This tutorial shows how to deploy a highly available app on Google Kubernetes Engine (GKE) by running WordPress on regional persistent disks. Regional persistent disks provide synchronous replication between two zones in a region.


Objectives

  • Create a regional GKE cluster.
  • Create a Kubernetes StorageClass resource that is configured for replicated zones.
  • Deploy WordPress with a regional disk that uses the StorageClass.
  • Simulate a zone failure by deleting a node.
  • Verify that the WordPress app and data migrate successfully to another replicated zone.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Compute Engine (regional persistent disks)

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.


  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Enable the Compute Engine and GKE APIs.

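    Alternatively, you can enable both APIs from Cloud Shell; this is a minimal sketch assuming the standard service names for Compute Engine and GKE:

    # Enable the Compute Engine and GKE APIs for the current project
    gcloud services enable compute.googleapis.com container.googleapis.com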

Creating the GKE cluster

  1. Open Cloud Shell:


    You run the remainder of this tutorial from Cloud Shell.

  2. Create a regional GKE cluster that spans two zones in the us-west1 region:

    # Use the newest supported GKE version available in us-west1
    CLUSTER_VERSION=$(gcloud container get-server-config \
      --region us-west1 --format='value(validMasterVersions[0])')
    # Create a regional cluster with one node in each of two zones
    gcloud container clusters create repd \
      --cluster-version=${CLUSTER_VERSION} \
      --machine-type=n1-standard-4 \
      --region=us-west1 \
      --num-nodes=1 \
      --node-locations=us-west1-b,us-west1-c
    

You now have a regional cluster with one node in each zone. The gcloud command has also automatically configured the kubectl command to connect to the cluster.
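
To confirm the layout, you can list the nodes with their zone label (the same label key that the StorageClass later in this tutorial selects on); expect one node in us-west1-b and one in us-west1-c:

    # Show each node with its zone label
    kubectl get nodes -L failure-domain.beta.kubernetes.io/zone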

Deploying the app with a regional disk

In this section, you install Helm, create the Kubernetes StorageClass that is used by the regional persistent disk, and deploy WordPress.

Install Helm and add the chart repository

The chart package, which is installed with Helm, contains everything you need to run WordPress.

  1. Install Helm locally in your Cloud Shell instance:

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
    
  2. Add the Bitnami repository:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    

Helm is now installed in your Cloud Shell instance, and the Bitnami chart repository has been added.
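
You can confirm the installation by printing the client version and the configured repositories:

    # Confirm the Helm client version and the Bitnami repository entry
    helm version --short
    helm repo list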

Create the StorageClass

In this section, you create the StorageClass that the chart uses to define the zones of the regional disk. The zones listed in the StorageClass must match the zones of the GKE cluster.

  1. Create a StorageClass for the regional disk:

    kubectl apply -f - <<EOF
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: repd-west1-b-c
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-ssd
      replication-type: regional-pd
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
    - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
        - us-west1-b
        - us-west1-c
    EOF
    

You now have a StorageClass that is capable of provisioning PersistentVolumes that are replicated across the us-west1-b and us-west1-c zones.
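
To verify the result, you can inspect the StorageClass before deploying:

    # Show the provisioner, parameters, and allowed topologies
    kubectl describe storageclass repd-west1-b-c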

Deploy WordPress

In this section, you deploy the WordPress chart. Kubernetes automatically attaches the regional persistent disk to an appropriate node in one of the replicated zones.

  1. Deploy the WordPress chart that is configured to use the StorageClass that you created earlier:

    helm install wp-repd \
      --set persistence.storageClass=repd-west1-b-c \
      --set global.storageClass=repd-west1-b-c \
      bitnami/wordpress
    
  2. Run the following command, which waits for the service load balancer's external IP address to be created:

    while [[ -z $SERVICE_IP ]]; \
      do SERVICE_IP=$(kubectl get svc wp-repd-wordpress \
      -o jsonpath='{.status.loadBalancer.ingress[].ip}'); \
      echo "Waiting for service external IP..."; sleep 2; \
    done;
    echo http://$SERVICE_IP/admin
    
  3. Verify that the persistent disk was created:

    while [[ -z $PV ]]; do PV=$(kubectl get pvc \
      wp-repd-wordpress -o jsonpath='{.spec.volumeName}'); \
      echo "Waiting for PV..."; sleep 2; \
    done
    kubectl describe pv $PV
    
  4. Open the WordPress admin page in your browser from the link that the command output displays:

    echo http://$SERVICE_IP/admin
  5. Log in with the username and password that the command output displays:

    cat - <<EOF
    Username: user
    Password: $(kubectl get secret \
      --namespace default wp-repd-wordpress \
      -o jsonpath="{.data.wordpress-password}" | base64 --decode)
    EOF
    

You now have a working deployment of WordPress that is backed by regional persistent disks in two zones.
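
If you want to confirm the replication from the Compute Engine side, you can look up the underlying disk and its replica zones. This sketch assumes the in-tree gce-pd provisioner from the StorageClass above and reuses the $PV variable from the earlier step:

    # Resolve the Compute Engine disk name that backs the PersistentVolume
    DISK=$(kubectl get pv $PV -o jsonpath='{.spec.gcePersistentDisk.pdName}')
    # A regional disk reports both replica zones
    gcloud compute disks describe $DISK --region us-west1 \
      --format='value(name,replicaZones)'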

Simulating a zone failure

In this section, you simulate a zone failure and watch Kubernetes move your workload to the other zone and attach the regional disk to the new node.

If the cluster autoscaler is enabled for the cluster, don't use the procedure described in this section to simulate a zone failure.

  1. Obtain the current node of the WordPress pod:

    # Node that is currently running the WordPress pod
    NODE=$(kubectl get pods \
      -l app.kubernetes.io/name=wordpress \
      -o jsonpath='{.items..spec.nodeName}')

    # Zone of that node
    ZONE=$(kubectl get node $NODE -o jsonpath="{.metadata.labels['failure-domain\.beta\.kubernetes\.io/zone']}")

    # Managed instance group that backs the node pool in that zone
    IG=$(gcloud compute instance-groups list --filter="name~gke-repd-default-pool zone:(${ZONE})" --format='value(name)')

    echo "Pod is currently on node ${NODE}"

    echo "Instance group to delete: ${IG} for zone: ${ZONE}"
    
  2. Simulate a zone failure by deleting the instance group for the node where the WordPress pod is running:

    gcloud compute instance-groups managed delete ${IG} --zone ${ZONE}
  3. Verify that both the WordPress pod and the persistent volume migrate to the node that is in the other zone:

    kubectl get pods -l app.kubernetes.io/name=wordpress -o wide

    Make sure the node that is displayed is different from the node in the previous step.

  4. Open the WordPress admin page in your browser from the link displayed in the command output:

    echo http://$SERVICE_IP/admin

    You have attached a regional persistent disk to a node that is in a different zone.
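
    As a final check, you can confirm that the pod's new node is in the other zone; this mirrors the lookup from step 1 (NEW_NODE is just an illustrative variable name):

    # Zone label of the node that is now running WordPress
    NEW_NODE=$(kubectl get pods \
      -l app.kubernetes.io/name=wordpress \
      -o jsonpath='{.items..spec.nodeName}')
    kubectl get node $NEW_NODE \
      -o jsonpath="{.metadata.labels['failure-domain\.beta\.kubernetes\.io/zone']}"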

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

  1. Delete the WordPress app and persistent disk:

    helm uninstall wp-repd
  2. Wait for all persistent volumes to be deleted:

    while [[ $(kubectl get pv | wc -l) -gt 1 ]]; \
      do echo "Waiting for PV deletion..."; sleep 2; \
    done
  3. Delete the GKE cluster:

    gcloud container clusters delete repd --region=us-west1
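
    Optionally, you can confirm that no tutorial disks remain. This assumes that filtering on the region field returns only regional disks; the command should print no output:

    # List any remaining regional disks in us-west1; expect none
    gcloud compute disks list --filter="region:us-west1"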

What's next