This example highlights a number of important GKE concepts.
Objectives
To deploy and run the guestbook application on GKE:
- Set up the Redis leader
- Set up two Redis followers
- Set up the guestbook web frontend
- Visit the guestbook website
- Scale up the guestbook web frontend
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
Make sure that billing is enabled for your Cloud project.
Install the following command-line tools used in this tutorial:
- gcloud is used to create and delete Kubernetes Engine clusters. gcloud is included in the Google Cloud SDK.
- kubectl is used to manage Kubernetes, the cluster orchestration system used by Kubernetes Engine. You can install kubectl using gcloud:
gcloud components install kubectl
Set defaults for the gcloud command-line tool
To save time typing your project ID and Compute Engine zone options, you can set the defaults:
gcloud config set project project-id
gcloud config set compute/zone compute-zone
Create a GKE cluster
The first step is to create a GKE cluster on which you'll run the guestbook application and the Redis service.
Create a GKE cluster named guestbook:
gcloud container clusters create guestbook --num-nodes=4
You can list the clusters running in your project using the following commands:
gcloud container clusters list
gcloud container clusters describe guestbook
Setting up the Redis leader
The guestbook application uses Redis to store its data. The application writes its data to a Redis leader instance and reads data from multiple Redis follower instances. The first step is to deploy a Redis leader.
First, clone the sample manifests:
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples cd kubernetes-engine-samples/guestbook git checkout abbb383
Use the manifest file named redis-leader-deployment.yaml to deploy the Redis leader. This manifest file specifies a Kubernetes Deployment that runs a single replica Redis leader Pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Run the following command to deploy the Redis leader:
kubectl apply -f redis-leader-deployment.yaml
Verify that the Redis leader Pod is running:
kubectl get pods
Output:
NAME                           READY   STATUS    RESTARTS   AGE
redis-leader-343230949-qfvrq   1/1     Running   0          43s
Run the following command to look at the logs from the Redis leader Pod:
kubectl logs deployment/redis-leader
Output:
1:M 24 Jun 2020 14:48:20.917 * Ready to accept connections
Create the Redis leader service
The guestbook application needs to communicate with the Redis leader to write its data. You can create a Service to proxy the traffic to the Redis leader Pod.
A Service is a Kubernetes abstraction which defines a logical set of Pods and a policy by which to access the Pods. The Service is effectively a named load balancer that proxies traffic to one or more Pods. When you set up a Service, you describe which Pods to proxy based on Pod labels.
The following redis-leader-service.yaml manifest file describes a Service resource for the Redis leader:
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backend
This manifest file creates a Service named redis-leader with a set of label selectors. These labels match the labels applied in the previous step, so this Service routes network traffic to the Redis leader Pod created earlier.
The ports section of the manifest declares a single port mapping. In this case, the Service routes the traffic on port: 6379 to port 6379 of the containers that match the specified selector labels. Note that the containerPort used in the Deployment must match the Service's targetPort for the traffic to reach the Deployment.
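To make the pairing concrete, here are the two fragments side by side, taken from the redis-leader manifests used in this tutorial:

```
# Service side (redis-leader-service.yaml): where traffic arrives
# and which container port it is forwarded to.
ports:
- port: 6379        # port the Service listens on
  targetPort: 6379  # must match the containerPort below
---
# Deployment side (redis-leader-deployment.yaml): the port the
# Redis container actually listens on.
ports:
- containerPort: 6379
```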
Start the Redis leader Service by running:
kubectl apply -f redis-leader-service.yaml
Verify that the Service is created:
kubectl get service
Output:
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.51.240.1     <none>        443/TCP    42s
redis-leader   10.51.242.233   <none>        6379/TCP   12s
Setting up Redis followers
Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.
The following redis-follower-deployment.yaml manifest file describes a Deployment for the Redis follower Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: gcr.io/google_samples/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
To create the Redis follower Deployment, run:
kubectl apply -f redis-follower-deployment.yaml
Verify that the two Redis follower replicas are running by querying the list of Pods:
kubectl get pods
Output:
NAME                              READY   STATUS    RESTARTS   AGE
redis-follower-76588f55b7-bnsq6   1/1     Running   0          27s
redis-follower-76588f55b7-qvtws   1/1     Running   0          27s
redis-leader-dd446dc55-kl7nl      1/1     Running   0          119s
Get the Pod logs for one of the Redis followers:
kubectl logs deployment/redis-follower
Output:
1:M 24 Jun 2020 14:50:43.617 * Background saving terminated with success
1:M 24 Jun 2020 14:50:43.617 * Synchronization with replica 10.12.3.4:6379 succeeded
Create the Redis follower service
The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.
The following redis-follower-service.yaml manifest file defines the Service configuration for the Redis followers:
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  ports:
  # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    role: follower
    tier: backend
This file defines a Service named redis-follower running on port 6379. Note that the selector field of the Service matches the Redis follower Pods created in the previous step.

Create the redis-follower Service by running:
kubectl apply -f redis-follower-service.yaml
Verify that the Service is created:
kubectl get service
Output:
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes       10.51.240.1     <none>        443/TCP    1m
redis-leader     10.51.242.233   <none>        6379/TCP   49s
redis-follower   10.51.247.238   <none>        6379/TCP   3s
Setting up the guestbook web frontend
Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.
The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX.
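The read/write split can be sketched in a few lines. This is a hypothetical illustration, not the tutorial's actual PHP code: the Service hostnames redis-follower and redis-leader come from the manifests in this tutorial, while the command classification below is an assumption made only for the sketch.

```python
# Hypothetical sketch of the frontend's routing decision (the real frontend
# is PHP; this only illustrates the logic): reads go to the follower Service,
# writes go to the leader Service.

# Assumed set of read-only Redis commands for this sketch.
READ_COMMANDS = {"GET", "LRANGE", "SMEMBERS"}

def pick_redis_host(command: str) -> str:
    """Return the Service hostname to contact for a given Redis command."""
    if command.upper() in READ_COMMANDS:
        return "redis-follower"  # reads can be served by any replica
    return "redis-leader"        # writes must go to the single leader

print(pick_redis_host("LRANGE"))  # -> redis-follower
print(pick_redis_host("SET"))     # -> redis-leader
```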
The following frontend-deployment.yaml manifest file describes the Deployment for the guestbook web server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
To create the guestbook web frontend Deployment, run:
kubectl apply -f frontend-deployment.yaml
Verify that the three replicas are running by querying the list of Pods using the labels that identify the web frontend:
kubectl get pods -l app=guestbook -l tier=frontend
Output:
NAME                        READY   STATUS    RESTARTS   AGE
frontend-7b78458576-8kp8s   1/1     Running   0          37s
frontend-7b78458576-gg86q   1/1     Running   0          37s
frontend-7b78458576-hz87g   1/1     Running   0          37s
The manifest file specifies the environment variable GET_HOSTS_FROM=dns. When you provide this configuration to the guestbook web frontend application, the frontend application uses the hostnames redis-follower and redis-leader to perform a DNS lookup. The DNS lookup finds the IP addresses of the respective Services you created in the previous steps. This concept is called DNS service discovery.
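As a brief aside on how those hostnames resolve: inside the cluster, a Service is also reachable under a fully qualified DNS name following the Kubernetes naming convention. The helper below only builds that conventional name; it is an illustration, not code from the tutorial, and the cluster.local suffix assumes the default cluster domain.

```python
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build the conventional in-cluster DNS name for a Kubernetes Service.

    Pods in the same namespace can use the short name (e.g. "redis-follower");
    the fully qualified form works from any namespace.
    """
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("redis-follower"))
# -> redis-follower.default.svc.cluster.local
```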
Expose frontend on an external IP address
The redis-follower and redis-leader Services that you created in the previous steps are only accessible within the GKE cluster, because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods that the Service is pointing to. This IP address is accessible only within the cluster.

However, you need the guestbook web frontend Service to be externally visible. That is, you want a client to be able to request the Service from outside the GKE cluster. To accomplish this, you can specify type: LoadBalancer or type: NodePort in the Service configuration, depending on your requirements. In this example, you use type: LoadBalancer. The frontend-service.yaml manifest file specifying this configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
When the frontend Service is created, GKE creates a load balancer and an external IP address. Note that these resources are subject to billing. The port declaration under the ports section specifies port: 80, and the targetPort is not specified. When you omit the targetPort property, it defaults to the value of the port field. In this case, this Service routes external traffic on port 80 to port 80 of the containers in the frontend Deployment.
To create the Service, run the following command:
kubectl apply -f frontend-service.yaml
Visiting the guestbook website
To access the guestbook Service, find the external IP of the Service that you set up by running the command:
kubectl get service frontend
Output:
NAME       CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
frontend   10.51.242.136   188.8.131.52   80:32372/TCP   1m
Copy the IP address from the EXTERNAL-IP column, and load the page in your browser.
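If you prefer not to read the EXTERNAL-IP column by eye, you can ask kubectl for machine-readable output (kubectl get service frontend -o json) and extract the load balancer IP yourself. A minimal sketch, using an abbreviated sample payload rather than live cluster output:

```python
import json

# Abbreviated sample of `kubectl get service frontend -o json` output;
# only the fields needed for the lookup are shown.
sample = """
{
  "status": {
    "loadBalancer": {
      "ingress": [{"ip": "188.8.131.52"}]
    }
  }
}
"""

def external_ip(service_json: str) -> str:
    """Extract the first load balancer ingress IP from a Service object."""
    status = json.loads(service_json)["status"]
    return status["loadBalancer"]["ingress"][0]["ip"]

print(external_ip(sample))  # -> 188.8.131.52
```

The shell equivalent is kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}'.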
Try adding some guestbook entries by typing in a message, and clicking Submit. The message you typed appears in the frontend. This message indicates that data is successfully added to Redis through the Services you created earlier.
Scaling up the web frontend
Suppose your guestbook app has been running for a while and suddenly gets a burst of publicity. You decide it would be a good idea to add more web servers to your frontend. You can do this easily, because your servers are defined as a Service backed by a Deployment controller.
Scale up the number of frontend Pods to five by running:
kubectl scale deployment frontend --replicas=5
The configuration for the Deployment is updated to specify that there should be five replicas running now. The Deployment adjusts the number of Pods it is running to match the configuration. To verify the number of replicas, run the following command:
kubectl get pods
Output:
NAME                             READY   STATUS    RESTARTS   AGE
frontend-88237173-3s3sc          1/1     Running   0          1s
frontend-88237173-twgvn          1/1     Running   0          1s
frontend-88237173-5p257          1/1     Running   0          23m
frontend-88237173-84036          1/1     Running   0          23m
frontend-88237173-j3rvr          1/1     Running   0          23m
redis-leader-343230949-qfvrq     1/1     Running   0          54m
redis-follower-132015689-dp23k   1/1     Running   0          37m
redis-follower-132015689-xq9v0   1/1     Running   0          37m
You can scale down the number of frontend Pods using the same command.
Cleaning up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the Service: This step deallocates the Cloud Load Balancer created for the frontend Service:
kubectl delete service frontend
Wait until the load balancer provisioned for the frontend Service is deleted: The load balancer is deleted asynchronously in the background when you run kubectl delete. Wait until the load balancer is deleted by watching the output of the following command:
gcloud compute forwarding-rules list
Delete the GKE cluster: This step deletes the resources that make up the GKE cluster, such as the compute instances, disks, and network resources.
gcloud container clusters delete guestbook