The tutorial shows how to set up the guestbook web service on an external IP with a load balancer and how to run a Redis cluster with a single master and multiple workers.
The example highlights a number of important GKE concepts:
Objectives
To deploy and run the guestbook application on GKE, you must:
- Set up a Redis master
- Set up Redis workers
- Set up the guestbook web frontend
- Visit the guestbook website
- Scale up the guestbook web frontend
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
Install the following command-line tools used in this tutorial:
gcloud is used to create and delete Kubernetes Engine clusters. gcloud is included in the Google Cloud SDK.
kubectl is used to manage Kubernetes, the cluster orchestration system used by Kubernetes Engine. You can install kubectl using gcloud:
gcloud components install kubectl
Set defaults for the gcloud command-line tool
To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set the defaults:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone [COMPUTE_ENGINE_ZONE]
Download the configuration files
Visit the Kubernetes Examples GitHub repository to download the configuration files used in this tutorial:
- frontend-deployment.yaml: the guestbook frontend Deployment
- frontend-service.yaml: the guestbook frontend Service
- redis-master-deployment.yaml: the Redis master Deployment
- redis-master-service.yaml: the Redis master Service
- redis-slave-deployment.yaml: the Redis workers Deployment
- redis-slave-service.yaml: the Redis workers Service
Create a GKE cluster
The first step is to create a GKE cluster on which you'll run the guestbook application and the Redis service.
Create a container cluster named guestbook with 2 nodes:
gcloud container clusters create guestbook --num-nodes=2
You can list the clusters in your project or get the details for a single cluster using the following commands:
gcloud container clusters list
gcloud container clusters describe guestbook
Step 1: Set up a Redis master
The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis worker (slave) instances. The first step is to deploy a Redis master.
Use the manifest file named redis-master-deployment.yaml to deploy the Redis master. This manifest file specifies a Deployment controller that runs a single-replica Redis master Pod:
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Run the following command to deploy the Redis master:
kubectl create -f redis-master-deployment.yaml
Verify that the Redis master Pod is running with kubectl get pods:
kubectl get pods
Output:
NAME                           READY     STATUS    RESTARTS   AGE
redis-master-343230949-qfvrq   1/1       Running   0          43s
Copy the Pod name from the output of the previous command and run the following command to take a look at the logs from the Redis master Pod:
kubectl logs -f [POD_NAME]
Create redis-master service
The guestbook application needs to communicate with the Redis master to write its data. You need to create a Service to proxy the traffic to the Redis master Pod.
A Service is a Kubernetes abstraction that defines a logical set of Pods and a policy by which to access them. It is effectively a named load balancer that proxies traffic to one or more Pods. When you set up a Service, you tell it which Pods to proxy to based on Pod labels.
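To make the label-based selection concrete, here is a minimal Python sketch of the matching rule a Service applies (this is illustrative only, not the actual Kubernetes code; the Pod names are hypothetical): a Pod is selected when its labels contain every key/value pair in the Service's selector.

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A Pod matches when every selector key/value appears in its labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# Hypothetical Pods with the labels used in this tutorial.
pods = {
    "redis-master-343230949-qfvrq": {"app": "redis", "role": "master", "tier": "backend"},
    "redis-slave-132015689-dp23k": {"app": "redis", "role": "slave", "tier": "backend"},
    "frontend-88237173-5p257": {"app": "guestbook", "tier": "frontend"},
}

# The redis-master Service's selector picks out only the master Pod.
selector = {"app": "redis", "role": "master", "tier": "backend"}
backends = [name for name, labels in pods.items() if selector_matches(selector, labels)]
print(backends)  # only the redis-master Pod matches
```

Note that extra labels on a Pod do not prevent a match; only the selector's own keys are checked.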
Take a look at the redis-master-service.yaml manifest file, which describes a Service resource for the Redis master:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
This manifest file creates a Service named redis-master with a set of label selectors. These labels match the labels deployed in the previous step, so this Service routes network traffic to the Redis master Pod created in Step 1.
The ports section of the manifest declares a single port mapping. In this case, the Service routes traffic on port: 6379 to port 6379 of the containers that match the specified selector labels. Note that the containerPort used in the Deployment must match the targetPort in the Service to route traffic to the Deployment.
Start up the Redis master's Service by running:
kubectl create -f redis-master-service.yaml
Verify that the service is created:
kubectl get service
Output:
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.51.240.1     <none>        443/TCP    42s
redis-master   10.51.242.233   <none>        6379/TCP   12s
Step 2: Set up Redis workers
Although the Redis master is a single pod, you can make it highly available and meet traffic demands by adding a few Redis worker replicas.
Take a look at the redis-slave-deployment.yaml manifest file, which describes a Deployment for the Redis worker Pods:
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
The Deployment is responsible for achieving the configuration declared in the manifest file. For example, this manifest file defines two replicas for the Redis workers. If no replicas are running, the Deployment starts both on your container cluster; if more than two replicas are running, it kills some to meet the specified configuration.
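The reconciliation behavior described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual Deployment controller; the replica names are hypothetical:

```python
def reconcile(desired: int, running: list) -> list:
    """One reconciliation pass: adjust the replica set toward the desired count."""
    running = list(running)
    while len(running) > desired:  # too many replicas: remove the extras
        running.pop()
    i = 0
    while len(running) < desired:  # too few replicas: start new ones
        running.append(f"redis-slave-new-{i}")
        i += 1
    return running

print(reconcile(2, []))                    # starts two replicas from zero
print(reconcile(2, ["a", "b", "c", "d"]))  # kills two to get back to the desired count
```

The real controller runs this kind of loop continuously, so the cluster converges back to the declared state after crashes or manual changes.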
In this case, the Deployment object specifies two replicas. To create the Redis worker Deployment, run:
kubectl create -f redis-slave-deployment.yaml
Verify that the two Redis worker replicas are running by querying the list of Pods:
kubectl get pods
Output:
NAME                           READY     STATUS    RESTARTS   AGE
redis-master-343230949-qfvrq   1/1       Running   0          17m
redis-slave-132015689-dp23k    1/1       Running   0          2s
redis-slave-132015689-xq9v0    1/1       Running   0          2s
Create redis-slave service
The guestbook application needs to communicate with the Redis workers to read data. To make the Redis workers discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
The redis-slave-service.yaml manifest file defines the Service configuration for the Redis workers:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
This file defines a Service named redis-slave running on port 6379. Note that the selector field of the Service matches the Redis worker Pods created in the previous step.
Create the redis-slave Service by running:
kubectl create -f redis-slave-service.yaml
Verify that the Service is created:
kubectl get service
Output:
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.51.240.1     <none>        443/TCP    1m
redis-master   10.51.242.233   <none>        6379/TCP   49s
redis-slave    10.51.247.238   <none>        6379/TCP   3s
Step 3: Set up the guestbook web frontend
Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis workers, this is a replicated application managed by a Deployment.
This tutorial uses a simple PHP frontend. It is configured to talk to either the Redis worker or master Services, depending on whether the request is a read or a write. It exposes a simple JSON interface, and serves a jQuery-Ajax-based UX.
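The read/write split described above can be sketched as a small routing rule. This is a hedged Python illustration of the behavior, not the tutorial's actual PHP code; the command prefixes checked are just a few common Redis write commands:

```python
def redis_host_for(request: str) -> str:
    """Route writes to the redis-master Service, reads to redis-slave."""
    # SET, LPUSH, and DEL are examples of Redis write commands.
    is_write = request.upper().startswith(("SET", "LPUSH", "DEL"))
    return "redis-master" if is_write else "redis-slave"

print(redis_host_for("SET guestbook 'hello'"))  # redis-master
print(redis_host_for("GET guestbook"))          # redis-slave
```

Because the Service names double as hostnames inside the cluster, the application never needs to know individual Pod IPs.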
Take a look at the frontend-deployment.yaml manifest file, which describes the Deployment for the guestbook web server:
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
To create the guestbook web frontend Deployment, run:
kubectl create -f frontend-deployment.yaml
Verify that the three replicas are running by querying the list of Pods with the labels that identify the web frontend:
kubectl get pods -l app=guestbook -l tier=frontend
Output:
NAME                      READY     STATUS    RESTARTS   AGE
frontend-88237173-5p257   1/1       Running   0          40s
frontend-88237173-84036   1/1       Running   0          40s
frontend-88237173-j3rvr   1/1       Running   0          40s
The manifest file above specifies the environment variable GET_HOSTS_FROM=dns. When the guestbook web frontend application is given this configuration, it uses the hostnames redis-slave and redis-master and performs a DNS lookup to find the IP addresses of the respective Services you created in the previous steps. This concept is called DNS service discovery.
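The GET_HOSTS_FROM switch can be sketched as follows. This is an illustrative Python fragment, not the frontend's actual PHP code: with "dns" the application uses the Service name directly and lets cluster DNS resolve it; with "env" it reads the REDIS_MASTER_SERVICE_HOST variable that Kubernetes injects into each Pod for existing Services:

```python
def redis_master_host(env: dict) -> str:
    """Pick the Redis master host from cluster DNS or injected env vars."""
    if env.get("GET_HOSTS_FROM", "dns") == "env":
        # Kubernetes injects <SERVICE>_SERVICE_HOST variables into Pods.
        return env["REDIS_MASTER_SERVICE_HOST"]
    # With DNS service discovery, the Service name itself is the hostname.
    return "redis-master"

print(redis_master_host({"GET_HOSTS_FROM": "dns"}))
print(redis_master_host({"GET_HOSTS_FROM": "env",
                         "REDIS_MASTER_SERVICE_HOST": "10.51.242.233"}))
```

The env-var fallback only sees Services that existed when the Pod started, which is one reason the DNS path is the default here.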
Expose frontend on an external IP address
The redis-slave and redis-master Services you created in the previous steps are only accessible within the container cluster, because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Service is pointing to, and this IP address is accessible only within the cluster.
However, you need the guestbook web frontend Service to be externally visible.
That is, you want a client to be able to request the Service from outside the
container cluster. To accomplish this, you can specify
type: LoadBalancer or
type: NodePort in the Service
configuration depending on your needs. In this example, you will use
type: LoadBalancer. The
frontend-service.yaml manifest file specifying this
configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Once the frontend Service is created, GKE creates a load balancer and an external IP address. Note that these resources are subject to billing. The port declaration under the ports section specifies port: 80, and targetPort is not specified. When you omit the targetPort property, it defaults to the value of the port field. In this case, this Service routes external traffic on port 80 to port 80 of the containers in the frontend Deployment.
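The targetPort defaulting rule can be stated as a one-liner. This Python sketch is illustrative of the rule described above, not the Kubernetes API machinery itself:

```python
def effective_target_port(service_port: dict) -> int:
    """Return targetPort if set; otherwise default to the value of port."""
    return service_port.get("targetPort", service_port["port"])

# frontend Service: targetPort omitted, so it defaults to port 80.
print(effective_target_port({"port": 80}))  # 80
# redis-master Service: targetPort set explicitly to 6379.
print(effective_target_port({"port": 6379, "targetPort": 6379}))  # 6379
```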
To create the Service, first uncomment the type: LoadBalancer line in the frontend-service.yaml manifest file (and comment out the type: NodePort line). Then, create the Service by running:
kubectl create -f frontend-service.yaml
Step 4: Visit the guestbook website
To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:
kubectl get service frontend
Output:
NAME       CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend   10.51.242.136   22.214.171.124   80:32372/TCP   1m
Copy the IP address in the EXTERNAL-IP column, and load the page in your browser:
Congratulations! Try adding some guestbook entries.
Step 5: Scale up the web frontend
Suppose your guestbook app has been running for a while, and it gets a sudden burst of publicity. You decide it would be a good idea to add more web servers to your frontend. You can do this easily, since your servers are defined as a service that uses a Deployment controller.
Scale up the number of your
frontend Pods to 5 by running:
kubectl scale deployment frontend --replicas=5
The configuration for the Deployment is updated to specify that there should be 5 replicas running now. The Deployment adjusts the number of Pods it is running to match that. To verify, run the following command:
kubectl get pods
Output:
NAME                           READY     STATUS    RESTARTS   AGE
frontend-88237173-3s3sc        1/1       Running   0          1s
frontend-88237173-twgvn        1/1       Running   0          1s
frontend-88237173-5p257        1/1       Running   0          23m
frontend-88237173-84036        1/1       Running   0          23m
frontend-88237173-j3rvr        1/1       Running   0          23m
redis-master-343230949-qfvrq   1/1       Running   0          54m
redis-slave-132015689-dp23k    1/1       Running   0          37m
redis-slave-132015689-xq9v0    1/1       Running   0          37m
Once your site has fallen back into obscurity, you can ramp down the number of web server Pods in the same manner.
Step 6: Cleanup
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial, follow these steps to remove them after completing the tutorial:
Delete the Service: This step deallocates the Cloud Load Balancer created for the frontend Service:
kubectl delete service frontend
Wait for the load balancer provisioned for the frontend Service to be deleted: The load balancer is deleted asynchronously in the background when you run kubectl delete. Wait until the load balancer is deleted by watching the output of the following command:
gcloud compute forwarding-rules list
Delete the container cluster: This step will delete the resources that make up the container cluster, such as the compute instances, disks and network resources.
gcloud container clusters delete guestbook