This quickstart shows you how to deploy a containerized application with Kubernetes Engine.

Before you begin

Take the following steps to enable the Google Kubernetes Engine API:
  1. Visit the Kubernetes Engine page in the Google Cloud Platform Console.
  2. Create or select a project.
  3. Wait for the API and related services to be enabled. This can take several minutes.
  4. Enable billing for your project.

Choosing a shell

To complete this quickstart, you can use either Google Cloud Shell or your local shell.

Google Cloud Shell is a shell environment for managing resources hosted on Google Cloud Platform (GCP). Cloud Shell comes preinstalled with the gcloud and kubectl command-line tools. gcloud provides the primary command-line interface for GCP, and kubectl provides the command-line interface for running commands against Kubernetes clusters.

If you prefer to use your local shell, you must install the gcloud and kubectl command-line tools in your environment.

Cloud Shell

To launch Cloud Shell, perform the following steps:

  1. Go to the Google Cloud Platform Console.

  2. From the top-right corner of the console, click the Activate Google Cloud Shell button:

A Cloud Shell session opens inside a frame at the bottom of the console. You use this shell to run gcloud and kubectl commands.

Local Shell

To install gcloud and kubectl, perform the following steps:

  1. Install the Google Cloud SDK, which includes the gcloud command-line tool.
  2. After installing Cloud SDK, install the kubectl command-line tool by running the following command:

    gcloud components install kubectl

Configuring default settings for gcloud

Before getting started, you should use gcloud to configure two default settings: your default project and compute zone.

Configuring these default settings makes it easier to run gcloud commands, because otherwise gcloud requires you to specify the project and compute zone for each operation. You can also override the default settings for a single command by passing operational flags, such as --project, --zone, and --cluster, to gcloud commands.

When you create Kubernetes Engine resources after configuring your default project and compute zone, the resources are automatically created in that project and zone.
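For example, with defaults configured you can run a command with no flags, or override the defaults for a single command. The project ID and zone below are placeholder values, not ones this quickstart creates:

```shell
# With a default project and zone configured, no flags are needed:
gcloud container clusters list

# To target a different project or zone for one command, override the
# defaults explicitly. my-other-project and us-east1-b are placeholders.
gcloud container clusters list --project my-other-project --zone us-east1-b
```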

Setting a default project

The project ID is your project's unique identifier. When you first create a project, you can use the automatically generated project ID or you can create your own.

To set a default project, run the following command from Cloud Shell:

gcloud config set project [PROJECT_ID]

Replace [PROJECT_ID] with your project ID.

Setting a default compute zone

Your compute zone is an approximate regional location in which your clusters and their resources live. For example, us-west1-a is a zone in the us-west1 region.

To set a default compute zone, run the following command:

gcloud config set compute/zone [COMPUTE_ZONE]

where [COMPUTE_ZONE] is the desired geographical compute zone, such as us-west1-a.
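If you are unsure which zone to choose, you can list the zones available to your project (this assumes the Compute Engine API is enabled):

```shell
# List all available compute zones and the regions they belong to.
gcloud compute zones list
```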

Creating a Kubernetes Engine cluster

A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. You deploy applications to clusters, and the applications run on the nodes.

To create a cluster, run the following command:

gcloud container clusters create [CLUSTER_NAME]

where [CLUSTER_NAME] is the name you choose for the cluster.

Getting authentication credentials for the cluster

After creating your cluster, you need to get authentication credentials to interact with the cluster.

To authenticate for the cluster, run the following command:

gcloud container clusters get-credentials [CLUSTER_NAME]

Deploying an application to the cluster

Now that you have created a cluster, you can deploy a containerized application to it. For this quickstart, you can deploy our example web application, hello-app.

Kubernetes Engine uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the Internet.
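As an illustrative sketch of what these objects look like, a Deployment can also be described declaratively in YAML and applied with kubectl apply. The manifest below is a hypothetical approximation, not the exact object this quickstart creates (the quickstart instead uses kubectl run):

```shell
# Illustrative only: a declarative Deployment manifest applied via stdin.
# The names and labels here are hypothetical.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
EOF
```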

Creating the Deployment

To run hello-app in your cluster, run the following command:

kubectl run hello-server --image gcr.io/google-samples/hello-app:1.0 --port 8080

This Kubernetes command, kubectl run, creates a new Deployment named hello-server. The Deployment's Pod runs the hello-app image in its container.

In this command:

  • --image specifies a container image to deploy. In this case, the command pulls the example image from a Google Container Registry bucket, gcr.io/google-samples/hello-app. :1.0 indicates the specific image version to pull. If a version is not specified, the latest version is used.
  • --port specifies the port that the container exposes.
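Before exposing the application, you can check that the Deployment and its Pod are running:

```shell
# Show the Deployment's desired and ready replica counts.
kubectl get deployment hello-server

# List the Pods the Deployment manages (names carry a generated suffix).
kubectl get pods
```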

Exposing the Deployment

After deploying the application, you need to expose it to the Internet so that users can access it. You can expose your application by creating a Service, a Kubernetes resource that exposes your application to external traffic.

To expose your application, run the following kubectl expose command:

kubectl expose deployment hello-server --type="LoadBalancer"

Passing in the --type="LoadBalancer" flag creates a Compute Engine load balancer for your container. Load balancers are billed according to Compute Engine's load balancer pricing.

Inspecting and viewing the application

  1. Inspect the hello-server Service by running kubectl get:

    kubectl get service hello-server

    From this command's output, copy the Service's external IP address from the EXTERNAL-IP column.

  2. View the application from your web browser using the external IP address with the exposed port:

    http://[EXTERNAL_IP]:8080

You have just deployed a containerized application to Kubernetes Engine!

Clean up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this quickstart:

  1. Delete the application's Service by running kubectl delete:

    kubectl delete service hello-server
  2. Delete your cluster by running gcloud container clusters delete:

    gcloud container clusters delete [CLUSTER_NAME]

Optional: hello-app code review

hello-app is a simple web server application consisting of two files, main.go and a Dockerfile.

hello-app is packaged as a Docker container image. Container images can be stored in any Docker image registry, such as Google Container Registry. We host hello-app in a Container Registry bucket named gcr.io/google-samples/hello-app.


main.go is a web server implementation written in the Go programming language. The server responds to any HTTP request with a “Hello, world!” message.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	port := "8080"
	if fromEnv := os.Getenv("PORT"); fromEnv != "" {
		port = fromEnv
	}

	server := http.NewServeMux()
	server.HandleFunc("/", hello)
	log.Printf("Server listening on port %s", port)
	log.Fatal(http.ListenAndServe(":"+port, server))
}

func hello(w http.ResponseWriter, r *http.Request) {
	log.Printf("Serving request: %s", r.URL.Path)
	host, _ := os.Hostname()
	fmt.Fprintf(w, "Hello, world!\n")
	fmt.Fprintf(w, "Version: 1.0.0\n")
	fmt.Fprintf(w, "Hostname: %s\n", host)
}


Dockerfile describes the image you want Docker to build, including all of its resources and dependencies, and specifies which network port the app should expose. To learn more about how this file works, refer to the Dockerfile reference in the Docker documentation.

FROM golang:1.8-alpine
ADD . /go/src/hello-app
RUN go install hello-app

FROM alpine:latest
COPY --from=0 /go/bin/hello-app .
CMD ["./hello-app"]
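If you want to experiment with the image outside the cluster, you can build and run it locally, assuming Docker is installed and main.go plus the Dockerfile are in the current directory. The hello-app:1.0 tag below is a local placeholder, not the hosted image name:

```shell
# Build the image from the Dockerfile above.
docker build -t hello-app:1.0 .

# Run it locally; the app listens on port 8080 inside the container.
docker run --rm -p 8080:8080 hello-app:1.0
```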
