The tutorial shows how to set up the guestbook web service on an external IP with a load balancer, and how to run a Redis cluster with a single master (leader) and multiple replicas (followers).
The example highlights a number of important GKE concepts:
- Declarative configuration using YAML manifest files
- Deployments, which are Kubernetes resources that determine the configuration for a set of replicated Pods
- Services to create internal and external load balancers for a set of Pods
Objectives
To deploy and run the guestbook application on GKE:
- Set up the Redis leader
- Set up two Redis followers
- Set up the guestbook web frontend
- Visit the guestbook website
- Scale up the guestbook web frontend
The following diagram shows you an overview of the cluster architecture you create by completing these objectives:
Costs
In this document, you use the following billable components of Google Cloud:
To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
- Make sure that billing is enabled for your Google Cloud project.
Install the following command-line tools used in this tutorial:
- gcloud is used to create and delete Kubernetes Engine clusters. gcloud is included in the gcloud CLI.
- kubectl is used to manage Kubernetes, the cluster orchestration system used by Kubernetes Engine. You can install kubectl using gcloud:
gcloud components install kubectl
Set defaults for the Google Cloud CLI
To save time typing your project ID and Compute Engine zone or region options in the gcloud CLI, you can set the defaults:
gcloud config set project project-id
Depending on the mode of operation that you choose to use in GKE, you then specify a default compute zone or region. If you use the Standard mode, your cluster is zonal (for this tutorial), so set your default compute zone. If you use the Autopilot mode, your cluster is regional, so set your default compute region.
Autopilot
Run the following command, replacing compute-region with your compute region, such as us-west1:
gcloud config set compute/region compute-region
Standard
Run the following command, replacing compute-zone with your compute zone, such as us-west1-a:
gcloud config set compute/zone compute-zone
Create a GKE cluster
The first step is to create a GKE cluster on which you'll run the guestbook application and the Redis service.
Create a GKE cluster named guestbook
:
Autopilot
gcloud container clusters create-auto guestbook
Standard
gcloud container clusters create guestbook --num-nodes=4
You can list the clusters running in your project using the following commands:
gcloud container clusters list
gcloud container clusters describe guestbook
Setting up the Redis leader
The guestbook application uses Redis to store its data. The application writes its data to a Redis leader instance and reads data from multiple Redis follower instances. The first step is to deploy a Redis leader.
First, clone the sample manifests:
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples cd kubernetes-engine-samples/guestbook
Use the manifest file named redis-leader-deployment.yaml to deploy the Redis leader. This manifest file specifies a Kubernetes Deployment that runs a single-replica Redis leader Pod:
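The deployment manifest in the cloned sample repository looks roughly like the following sketch; the labels and port come from the descriptions later in this tutorial, while the image name and version are illustrative and may differ from the current samples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1                # a single Redis leader Pod
  selector:
    matchLabels:
      app: redis
      role: leader
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: redis         # illustrative; the sample pins a specific Redis version
        ports:
        - containerPort: 6379  # must match the Service targetPort created later
```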
Run the following command to deploy the Redis leader:
kubectl apply -f redis-leader-deployment.yaml
Verify that the Redis leader Pod is running:
kubectl get pods
Output:
NAME                           READY   STATUS    RESTARTS   AGE
redis-leader-343230949-qfvrq   1/1     Running   0          43s
Run the following command to look at the logs from the Redis leader Pod:
kubectl logs deployment/redis-leader
Output:
1:M 24 Jun 2020 14:48:20.917 * Ready to accept connections
Create the Redis leader service
The guestbook application needs to communicate with the Redis leader to write its data. You can create a Service to proxy the traffic to the Redis leader Pod.
A Service is a Kubernetes abstraction which defines a logical set of Pods and a policy to enable access to the Pods. The Service is effectively a named load balancer that proxies traffic to one or more Pods. When you set up a Service, you describe which Pods to proxy based on Pod labels.
View the redis-leader-service.yaml manifest file describing a Service resource for the Redis leader:
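A sketch of the Service manifest from the sample repository follows; the selector labels are assumed to match the leader Deployment's Pod labels, as the text below describes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379        # port the Service listens on
    targetPort: 6379  # port on the Redis leader containers
  selector:           # routes traffic to Pods with these labels
    app: redis
    role: leader
    tier: backend
```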
This manifest file creates a Service named redis-leader with a set of label selectors. These labels match the set of labels that are deployed in the previous step. Therefore, this Service routes the network traffic to the Redis leader Pod created in the previous step.

The ports section of the manifest declares a single port mapping. In this case, the Service routes the traffic on port: 6379 to the targetPort: 6379 of the containers that match the specified selector labels. Note that the containerPort used in the Deployment must match the targetPort to route traffic to the Deployment.
Start the Redis leader Service by running:
kubectl apply -f redis-leader-service.yaml
Verify that the Service is created:
kubectl get service
Output:
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.51.240.1     <none>        443/TCP    42s
redis-leader   10.51.242.233   <none>        6379/TCP   12s
Setting up Redis followers
Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.
View the redis-follower-deployment.yaml manifest file describing a Deployment for the Redis follower Pods:
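The follower Deployment resembles the leader's, but with two replicas and follower labels. The following sketch reflects the structure described in this tutorial; the image name is illustrative and may differ from the current samples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2                # two Redis follower Pods
  selector:
    matchLabels:
      app: redis
      role: follower
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: gcr.io/google_samples/gb-redis-follower:v2  # illustrative tag
        ports:
        - containerPort: 6379
```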
To create the Redis follower Deployment, run:
kubectl apply -f redis-follower-deployment.yaml
Verify that the two Redis follower replicas are running by querying the list of Pods:
kubectl get pods
Output:
NAME                              READY   STATUS    RESTARTS   AGE
redis-follower-76588f55b7-bnsq6   1/1     Running   0          27s
redis-follower-76588f55b7-qvtws   1/1     Running   0          27s
redis-leader-dd446dc55-kl7nl      1/1     Running   0          119s
Get the Pod logs for one of the Redis followers:
kubectl logs deployment/redis-follower
Output:
1:M 24 Jun 2020 14:50:43.617 * Background saving terminated with success
1:M 24 Jun 2020 14:50:43.617 * Synchronization with replica 10.12.3.4:6379 succeeded
Create the Redis follower service
The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.
The redis-follower-service.yaml manifest defines the Service configuration for the Redis followers:
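A sketch of that Service manifest, assuming the follower labels shown in the follower Deployment step:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  ports:
  - port: 6379        # the port this Service serves on
  selector:           # matches the Redis follower Pods
    app: redis
    role: follower
    tier: backend
```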
This file defines a Service named redis-follower running on port 6379. Note that the selector field of the Service matches the Redis follower Pods created in the previous step.
Create the redis-follower Service by running:
kubectl apply -f redis-follower-service.yaml
Verify that the Service is created:
kubectl get service
Output:
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes       10.51.240.1     <none>        443/TCP    1m
redis-leader     10.51.242.233   <none>        6379/TCP   49s
redis-follower   10.51.247.238   <none>        6379/TCP   3s
Setting up the guestbook web frontend
Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.
The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX.
View the frontend-deployment.yaml manifest file describing the Deployment for the guestbook web server:
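The frontend Deployment can be sketched as follows, using the labels and GET_HOSTS_FROM environment variable described later in this section; the image name and tag are illustrative and may differ from the current samples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                # three guestbook web server Pods
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v5  # illustrative tag
        env:
        - name: GET_HOSTS_FROM
          value: dns         # resolve redis-leader/redis-follower via DNS
        ports:
        - containerPort: 80
```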
To create the guestbook web frontend Deployment, run:
kubectl apply -f frontend-deployment.yaml
Verify that the three replicas are running by querying the list of Pods with the labels that identify the web frontend:
kubectl get pods -l app=guestbook -l tier=frontend
Output:
NAME                        READY   STATUS    RESTARTS   AGE
frontend-7b78458576-8kp8s   1/1     Running   0          37s
frontend-7b78458576-gg86q   1/1     Running   0          37s
frontend-7b78458576-hz87g   1/1     Running   0          37s
The manifest file specifies the environment variable GET_HOSTS_FROM=dns. When you provide the configuration to the guestbook web frontend application, the frontend application uses the hostnames redis-follower and redis-leader to perform a DNS lookup. The DNS lookup finds the IP addresses of the respective Services you created in the previous steps. This concept is called DNS service discovery.
Expose frontend on an external IP address
The redis-follower and redis-leader Services that you created in the previous steps are only accessible within the GKE cluster, because the default type for a Service is ClusterIP.

ClusterIP provides a single IP address for the set of Pods that the Service is pointing to. This IP address is accessible only within the cluster.

However, you need the guestbook web frontend Service to be externally visible. That is, you want a client to be able to request the Service from outside the GKE cluster. To accomplish this, you can specify type: LoadBalancer or type: NodePort in the Service configuration, depending on your requirements. In this example, you use type: LoadBalancer. The frontend-service.yaml manifest file specifying this configuration looks like this:
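A sketch of that manifest, assuming the frontend labels shown earlier; note that port: 80 is declared and targetPort is deliberately omitted, as explained below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer  # provisions an external load balancer and IP
  ports:
  - port: 80          # targetPort omitted; defaults to the port value
  selector:
    app: guestbook
    tier: frontend
```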
When the frontend Service is created, GKE creates a load balancer and an external IP address. Note that these resources are subject to billing. The port declaration under the ports section specifies port: 80, and the targetPort is not specified. When you omit the targetPort property, it defaults to the value of the port field. In this case, this Service routes external traffic on port 80 to port 80 of the containers in the frontend Deployment.
To create the Service, run the following command:
kubectl apply -f frontend-service.yaml
Visiting the guestbook website
To access the guestbook Service, find the external IP of the Service that you set up by running the command:
kubectl get service frontend
Output:
NAME       CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend   10.51.242.136   109.197.92.229   80:32372/TCP   1m
Copy the IP address from the EXTERNAL-IP column, and load the page in your browser:
Try adding some guestbook entries by typing in a message, and clicking Submit. The message you typed appears in the frontend. This message indicates that data is successfully added to Redis through the Services you created earlier.
Scaling up the web frontend
Suppose your guestbook app has been running for a while, and it gets a sudden burst of publicity. You decide it would be a good idea to add more web servers to your frontend. You can do this easily, because your servers are defined as a Service backed by a Deployment.
Scale up the number of your frontend Pods to five by running:
kubectl scale deployment frontend --replicas=5
Output:
deployment.extensions/frontend scaled
The configuration for the Deployment is updated to specify that there should be five replicas running now. The Deployment adjusts the number of Pods it is running to match the configuration. To verify the number of replicas, run the following command:
kubectl get pods
Output:
NAME                             READY   STATUS    RESTARTS   AGE
frontend-88237173-3s3sc          1/1     Running   0          1s
frontend-88237173-twgvn          1/1     Running   0          1s
frontend-88237173-5p257          1/1     Running   0          23m
frontend-88237173-84036          1/1     Running   0          23m
frontend-88237173-j3rvr          1/1     Running   0          23m
redis-leader-343230949-qfvrq     1/1     Running   0          54m
redis-follower-132015689-dp23k   1/1     Running   0          37m
redis-follower-132015689-xq9v0   1/1     Running   0          37m
You can scale down the number of frontend Pods using the same command.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
- Delete the Service. This step deallocates the Cloud Load Balancer created for the frontend Service:
kubectl delete service frontend
- Wait until the load balancer provisioned for the frontend Service is deleted. The load balancer is deleted asynchronously in the background when you run kubectl delete. Wait until the load balancer is deleted by watching the output of the following command:
gcloud compute forwarding-rules list
- Delete the GKE cluster. This step deletes the resources that make up the GKE cluster, such as the compute instances, disks, and network resources:
gcloud container clusters delete guestbook
What's next
- Learn how to configure automatic pod autoscaling for your Deployments using Horizontal Pod Autoscaling.
- Learn how to store persistent data in your application through the MySQL and WordPress tutorial.
- Learn how to map a domain name to a GKE Service.
- Explore other Kubernetes Engine tutorials.