This quickstart assumes a basic understanding of Kubernetes.
You can follow the steps on this page or try this quickstart as a Google Cloud Training lab.
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Visit the Kubernetes Engine page in the Google Cloud Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
- Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
Quota requirements
To complete this quickstart, you need available quota for:
- 1 Compute Engine CPU in your cluster's region.
- 1 in-use IP address.
To check your available quota, use the Cloud Console.
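If you prefer the command line, you can also inspect a region's quota usage with gcloud; us-west1 below is only an illustrative region:

```shell
# Describe a region; the output includes a quotas section with
# limits and current usage for metrics such as CPUS and IN_USE_ADDRESSES.
gcloud compute regions describe us-west1
```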
Choosing a shell
To complete this quickstart, you can use either Cloud Shell or your local shell.
Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes preinstalled with the gcloud command-line tool and the kubectl command-line tool. The gcloud tool provides the primary command-line interface for Google Cloud, and kubectl provides the primary command-line interface for running commands against Kubernetes clusters.
If you prefer using your local shell, you must install the gcloud and kubectl tools in your environment.
Cloud Shell
To launch Cloud Shell, perform the following steps:
- Go to the Google Cloud Console.
- From the upper-right corner of the console, click the Activate Cloud Shell button.
A Cloud Shell session opens inside a frame lower on the console. You use this shell to run gcloud and kubectl commands.
Local shell
To install gcloud and kubectl, perform the following steps:
- Install the Cloud SDK, which includes the gcloud command-line tool.
- After installing the Cloud SDK, install the kubectl command-line tool by running the following command:
gcloud components install kubectl
Configuring default settings for the gcloud tool
Use the gcloud tool to configure two default settings: your default project and your default compute zone.
Your project has a project ID, which is its unique identifier. When you first create a project, you can use the automatically generated project ID or you can create your own.
Your compute zone is a location in the region where your clusters and their resources live. For example, us-west1-a is a zone in the us-west1 region.
Configuring these default settings makes it easier to run gcloud commands, because gcloud requires that you specify the project and compute zone in which you want to work. You can also specify these settings or override default settings with flags, such as --project, --zone, and --cluster, in your gcloud commands.
When you create GKE resources after configuring your default project and compute zone, the resources are automatically created in that project and zone.
Setting a default project
Run the following command, replacing project-id with your project ID:
gcloud config set project project-id
Setting a default compute zone or region
Depending on the mode of operation that you choose in GKE, you then specify a default zone or region. If you use Standard mode, your cluster is zonal (for this tutorial), so set your default compute zone. If you use Autopilot mode, your cluster is regional, so set your default compute region.
Standard
Run the following command, replacing compute-zone with your compute zone, such as us-west1-a:
gcloud config set compute/zone compute-zone
Autopilot
Run the following command, replacing compute-region with your compute region, such as us-west1:
gcloud config set compute/region compute-region
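After setting these defaults, you can verify the active configuration; which properties appear in the output depends on which defaults you set:

```shell
# List the active gcloud configuration, including the default
# project and the compute/zone or compute/region you just set.
gcloud config list
```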
Creating a GKE cluster
A cluster consists of at least one cluster control plane machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. You deploy applications to clusters, and the applications run on the nodes.
Standard
The following command creates a one-node cluster. Replace cluster-name with the name of your cluster:
gcloud container clusters create cluster-name --num-nodes=1
Autopilot
The following command creates an Autopilot cluster. Replace cluster-name with the name of your cluster:
gcloud container clusters create-auto cluster-name
Get authentication credentials for the cluster
After creating your cluster, you need to get authentication credentials to interact with the cluster:
gcloud container clusters get-credentials cluster-name
This command configures kubectl to use the cluster you created.
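As a quick sanity check (assuming the cluster was created successfully), you can confirm that kubectl is pointing at the new cluster and can reach its nodes:

```shell
# Show which context kubectl is currently using.
kubectl config current-context

# List the cluster's nodes; a Standard cluster created with
# --num-nodes=1 should show one node in the Ready state.
kubectl get nodes
```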
Deploying an application to the cluster
Now that you have created a cluster, you can deploy a containerized application to it. For this quickstart, you can deploy our example web application, hello-app.
GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.
Creating the Deployment
To run hello-app in your cluster, run the following command:
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
This Kubernetes command, kubectl create deployment, creates a Deployment named hello-server. The Deployment's Pod runs the hello-app container image.
In this command:
- --image specifies a container image to deploy. In this case, the command pulls the example image from a Container Registry bucket, gcr.io/google-samples/hello-app.
- :1.0 indicates the specific image version to pull. If you don't specify a version, the latest version is used.
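Before continuing, you can confirm that the Deployment rolled out; the command below assumes the default namespace:

```shell
# Check the Deployment's status; READY should report 1/1 once
# the hello-server Pod has started.
kubectl get deployment hello-server
```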
Exposing the Deployment
After deploying the application, you need to expose it to the internet so that users can access it. You can expose your application by creating a Service, a Kubernetes resource that exposes your application to external traffic.
To expose your application, run the following kubectl expose command:
kubectl expose deployment hello-server --type LoadBalancer \
  --port 80 --target-port 8080
Passing in the --type LoadBalancer flag creates a Compute Engine load balancer for your container. The --port flag initializes public port 80 to the internet, and the --target-port flag routes the traffic to port 8080 of the application.
Load balancers are billed according to Compute Engine's load balancer pricing.
Inspecting and viewing the application
Inspect the running Pods by using kubectl get pods:
kubectl get pods
You should see one hello-server Pod running on your cluster.
Inspect the hello-server Service by using kubectl get service:
kubectl get service hello-server
From this command's output, copy the Service's external IP address from the EXTERNAL-IP column.
View the application from your web browser by using the external IP address with the exposed port:
http://external-ip/
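You can also test the endpoint from the command line. Replace external-ip with the address you copied; the exact response text depends on the image version:

```shell
# Request the app over the exposed port 80; hello-app replies
# with a short plain-text greeting.
curl http://external-ip/
```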
You have just deployed a containerized web application to GKE.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this quickstart, follow these steps.
Delete the application's Service by running kubectl delete:
kubectl delete service hello-server
This command deletes the Compute Engine load balancer that you created when you exposed the Deployment.
Delete your cluster by running gcloud container clusters delete:
gcloud container clusters delete cluster-name
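To confirm that the cleanup succeeded, you can list your remaining clusters; the deleted cluster should no longer appear:

```shell
# List all GKE clusters in the current project.
gcloud container clusters list
```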
Optional: hello-app code review
hello-app is a simple web server application that consists of two files: main.go and a Dockerfile.
hello-app is packaged as a Docker container image. Container images can be stored in any Docker image registry, such as Container Registry. We host hello-app in a Container Registry bucket named gcr.io/google-samples/hello-app.
main.go
main.go is a web server implementation written in the Go programming language. The server responds to any HTTP request with a "Hello, world!" message.
Dockerfile
The Dockerfile describes the image you want Docker to build, including all of its resources and dependencies, and specifies which network port the app should expose. For more information about how this file works, see the Dockerfile reference in the Docker documentation.
What's next
- Learn more about creating clusters.
- Learn more about Kubernetes.
- Read the kubectl reference documentation.
- Learn how to package, host, and deploy a simple web server application.
- Deploy a Guestbook application with Redis and PHP.
- Deploy a stateful WordPress application with persistent storage and MySQL.