Create a Guestbook with Redis and PHP

This tutorial demonstrates how to build a simple multi-tier web application using Container Engine. The tutorial application is a guestbook that allows visitors to enter text in a log and to see the last few logged entries.

The tutorial shows how to set up the guestbook web service on an external IP with a load balancer and how to run a Redis cluster with a single master and multiple workers.

The example highlights a number of important Container Engine concepts.

Before you begin

Take the following steps to enable the Google Container Engine API:
  1. Visit the Container Engine page in the Google Cloud Platform Console.
  2. Create or select a project.
  3. Wait for the API and related services to be enabled. This can take several minutes.
  4. Enable billing for your project.

Install the following command-line tools used in this tutorial:

  • gcloud is used to create and delete Container Engine clusters. gcloud is included in the Google Cloud SDK.
  • kubectl is used to manage Kubernetes, the cluster orchestration system used by Container Engine. You can install kubectl using gcloud:
    gcloud components install kubectl

Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set default configuration values by running the following commands:
$ gcloud config set project PROJECT_ID
$ gcloud config set compute/zone us-central1-b

Download the configuration files

Visit the Kubernetes GitHub repository to download the configuration files used in this tutorial.

Create a Container Engine cluster

The first step is to create a Container Engine cluster on which you'll run the guestbook application and the Redis service.

Create a container cluster named guestbook with 3 nodes:

gcloud container clusters create guestbook --num-nodes=3

You can list the clusters in your project or get the details for a single cluster using the following commands:

gcloud container clusters list
gcloud container clusters describe guestbook

Run a guestbook app on your cluster

To deploy and run the guestbook application on Container Engine, you must:

  1. Set up a Redis master
  2. Set up Redis workers
  3. Set up the guestbook web frontend
  4. Visit the guestbook website
  5. Scale up the guestbook web frontend

Step 1: Set up a Redis master

The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis worker (slave) instances. The first step is to deploy a Redis master.

Use the manifest file named redis-master-deployment.yaml to deploy the Redis master. This manifest file specifies a Deployment controller that runs a single-replica Redis master Pod:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
  # these labels can be applied automatically 
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: master
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically 
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: master
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Run the following command to deploy the Redis master:

kubectl create -f redis-master-deployment.yaml

Verify that the Redis master Pod is running by running kubectl get pods:

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
redis-master-343230949-qfvrq   1/1       Running   0          43s

Copy the Pod name from the output of the previous command and run the following command to take a look at the logs from the Redis master Pod:

kubectl logs -f POD-NAME

Create redis-master service

The guestbook application needs to communicate with the Redis master to write its data. You need to create a Service to proxy the traffic to the Redis master Pod.

A Service is a Kubernetes abstraction that defines a logical set of Pods and a policy for accessing them. It is effectively a named load balancer that proxies traffic to one or more Pods. When you set up a Service, you tell it which Pods to proxy to, based on Pod labels.
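The label-matching rule a Service applies can be sketched in a few lines (a conceptual illustration, not code from this tutorial; the function name is hypothetical):

```python
def matches(selector: dict, pod_labels: dict) -> bool:
    """A Service proxies to every Pod whose labels include all
    of the Service's selector key/value pairs."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# The redis-master Service selects Pods carrying all three label pairs:
selector = {"app": "redis", "role": "master", "tier": "backend"}
```

A Pod with extra labels still matches, as long as every selector pair is present.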

Take a look at the redis-master-service.yaml manifest file describing a Service resource for the Redis master:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

This manifest file creates a Service named redis-master with a set of label selectors. These labels match the labels applied to the Redis master Pod deployed in the previous step. Therefore, this Service routes network traffic to the Redis master Pod created in Step 1.

The ports section of the manifest declares a single port mapping. In this case, the Service will route the traffic on port: 6379 to the targetPort: 6379 of the containers that match the specified selector labels. Note that the containerPort used in the Deployment must match the targetPort to route traffic to the Deployment.
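To make the distinction concrete, here is a hypothetical ports section (not used in this tutorial) in which the two values differ:

```yaml
ports:
- port: 6379         # the port the Service exposes inside the cluster
  targetPort: 16379  # the port the container listens on; must match the
                     # containerPort declared in the Deployment's Pod template
```

Clients would still connect to the Service on port 6379; only the container-side port changes.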

Start up the Redis master's Service by running:

kubectl create -f redis-master-service.yaml

Verify that the service is created:

$ kubectl get service
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.51.240.1             443/TCP    42s
redis-master   10.51.242.233           6379/TCP   12s

Step 2: Set up Redis workers

Although the Redis master is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis worker replicas.

Take a look at the redis-slave-deployment.yaml manifest file describing a Deployment for the Redis worker pods:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: slave
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 2
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: slave
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379

A Deployment is responsible for achieving the configuration declared in the manifest file. For example, this manifest file defines two replicas for the Redis workers. If no replicas are running, the Deployment starts two replicas on your container cluster. If more than two replicas are running, it terminates some to meet the specified configuration.
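The control-loop decision described above can be sketched as follows (a conceptual illustration; the function and its return shape are hypothetical, not Kubernetes API code):

```python
def reconcile(desired: int, running: list) -> dict:
    """Decide how a Deployment closes the gap between the declared
    replica count and the Pods actually running."""
    if len(running) < desired:
        # Too few Pods: start enough new replicas to reach the target.
        return {"action": "start", "count": desired - len(running)}
    if len(running) > desired:
        # Too many Pods: terminate the surplus.
        return {"action": "terminate", "count": len(running) - desired}
    return {"action": "none", "count": 0}
```

The Deployment runs this kind of comparison continuously, so the cluster converges on the declared state even after failures.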

In this case, the Deployment object specifies two replicas. To create the Redis worker Deployment, run:

kubectl create -f redis-slave-deployment.yaml

Verify that the two Redis worker replicas are running by querying the list of Pods:

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
redis-master-343230949-qfvrq   1/1       Running   0          17m
redis-slave-132015689-dp23k    1/1       Running   0          2s
redis-slave-132015689-xq9v0    1/1       Running   0          2s

Create redis-slave service

The guestbook application needs to communicate with the Redis workers to read data. To make the Redis workers discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.

The redis-slave-service.yaml manifest file defines the Service configuration for the Redis workers:

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

This file defines a Service named redis-slave running on port 6379. Note that the selector field of the Service matches the Redis worker Pods created in the previous step.

Create the redis-slave Service by running:

kubectl create -f redis-slave-service.yaml

Verify that the Service is created:

$ kubectl get service
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.51.240.1             443/TCP    1m
redis-master   10.51.242.233           6379/TCP   49s
redis-slave    10.51.247.238           6379/TCP   3s

Step 3: Set up the guestbook web frontend

Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis workers, this is a replicated application managed by a Deployment.

This tutorial uses a simple PHP frontend. It is configured to talk to either the Redis worker or master Services, depending on whether the request is a read or a write. It exposes a simple JSON interface, and serves a jQuery-Ajax-based UX.
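The frontend's routing rule can be summarized in one line (a conceptual sketch; the actual application is written in PHP, and this function name is hypothetical):

```python
def redis_service_for(is_write: bool) -> str:
    # Writes go to the single master Service; reads are
    # load-balanced across the worker Service's replicas.
    return "redis-master" if is_write else "redis-slave"
```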

Take a look at the frontend-deployment.yaml manifest file describing the Deployment for the guestbook web server:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: guestbook
  #   tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 3
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80

To create the guestbook web frontend Deployment, run:

kubectl create -f frontend-deployment.yaml

Verify that the three replicas are running by querying the list of Pods, using the labels that identify the web frontend:

$ kubectl get pods -l app=guestbook -l tier=frontend

NAME                      READY     STATUS    RESTARTS   AGE
frontend-88237173-5p257   1/1       Running   0          40s
frontend-88237173-84036   1/1       Running   0          40s
frontend-88237173-j3rvr   1/1       Running   0          40s

The manifest file above specifies the environment variable GET_HOSTS_FROM=dns. When the guestbook web frontend application is given this configuration, it uses the hostnames redis-slave and redis-master and performs a DNS lookup to find the IP addresses of the respective Services you created in the previous steps. This concept is called DNS service discovery.
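The lookup behavior that GET_HOSTS_FROM selects can be sketched as follows (a conceptual illustration in Python; the real frontend is PHP, and this function is hypothetical):

```python
import os

def service_host(service_name: str) -> str:
    """Resolve a Service's host the way the frontend does."""
    if os.environ.get("GET_HOSTS_FROM", "dns") == "dns":
        # Cluster DNS resolves the Service name to its cluster IP,
        # so the Service name itself is usable as a hostname.
        return service_name
    # Otherwise fall back to the environment variables Kubernetes
    # injects for each Service, e.g. REDIS_MASTER_SERVICE_HOST.
    return os.environ[service_name.upper().replace("-", "_") + "_SERVICE_HOST"]
```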

Expose frontend on an external IP address

The redis-slave and redis-master Services you created in the previous steps are only accessible within the container cluster, because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.

However, you need the guestbook web frontend Service to be externally visible. That is, you want a client to be able to request the Service from outside the container cluster. To accomplish this, you need to specify type: LoadBalancer in the Service configuration. The frontend-service.yaml manifest file specifying this configuration looks like this:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend

When the frontend Service is created, Container Engine creates a load balancer and an external IP address. Note that these resources are subject to billing. The port declaration under the ports section specifies port: 80, and targetPort is not specified. When you omit the targetPort property, it defaults to the value of the port field. In this case, this Service routes external traffic on port 80 to port 80 of the containers in the frontend Deployment.
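In other words, the ports section above is equivalent to spelling the default out explicitly:

```yaml
ports:
- port: 80
  targetPort: 80  # defaults to the value of port when omitted
```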

To create the Service, first uncomment the following line in the frontend-service.yaml file:

type: LoadBalancer

Then, create the service by running:

kubectl create -f frontend-service.yaml

Step 4: Visit the guestbook website

To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:

$ kubectl get service frontend
NAME       CLUSTER-IP      EXTERNAL-IP        PORT(S)        AGE
frontend   10.51.242.136   109.197.92.229     80:32372/TCP   1m

Copy the IP address in the EXTERNAL-IP column, and load the page in your browser:

Guestbook running on Container Engine

Congratulations! Try adding some guestbook entries.

Step 5: Scale up the web frontend

Suppose your guestbook app has been running for a while and it gets a sudden burst of publicity. You decide it would be a good idea to add more web servers to your frontend. You can do this easily, since your servers are defined as a Service that uses a Deployment controller.

Scale up the number of your frontend Pods to 5 by running:

kubectl scale deployment frontend --replicas=5

The configuration for the Deployment is updated to specify that there should be 5 replicas running now. The Deployment adjusts the number of Pods it is running to match that. To verify, run the following command:

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
frontend-88237173-3s3sc        1/1       Running   0          1s
frontend-88237173-twgvn        1/1       Running   0          1s
frontend-88237173-5p257        1/1       Running   0          23m
frontend-88237173-84036        1/1       Running   0          23m
frontend-88237173-j3rvr        1/1       Running   0          23m
redis-master-343230949-qfvrq   1/1       Running   0          54m
redis-slave-132015689-dp23k    1/1       Running   0          37m
redis-slave-132015689-xq9v0    1/1       Running   0          37m

Once your site has fallen back into obscurity, you can ramp down the number of web server Pods in the same manner.

Step 6: Cleanup

After completing this tutorial, follow these steps to remove the following resources and prevent unwanted charges from accruing to your account:

  1. Delete the Service: This step will deallocate the Cloud Load Balancer created for the frontend Service:

    kubectl delete service frontend
    
  2. Wait for the Load Balancer provisioned for the frontend Service to be deleted: The load balancer is deleted asynchronously in the background when you run kubectl delete. Wait until the load balancer is deleted by watching the output of the following command:

    gcloud compute forwarding-rules list
    
  3. Delete the container cluster: This step will delete the resources that make up the container cluster, such as the compute instances, disks and network resources.

    gcloud container clusters delete guestbook
    
