Exposing an Application to External Traffic

This page explains how to expose your application to external traffic.


After you deploy an application to your cluster using a Deployment or StatefulSet object, its Pods are automatically assigned Pod IP addresses, which can be used to communicate directly with the Pods within the cluster. To enable communication beyond the cluster, you need to expose the application to external traffic.

To expose your application, you first open a port on its Pods so they can accept traffic. Then, you create a Kubernetes Service resource of type: LoadBalancer.

Pods can be combined into a group to form a Service. A Service targets Pods using labels, which are defined in the Pod specification of the workload object. When users access the Service, their traffic is automatically load balanced across the Pods that serve your application.
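As an abbreviated sketch of this label-based targeting (using the run: nginx-backend label from the examples on this page; required fields such as apiVersion are omitted for brevity), a Service's selector must match the labels in the Pods' metadata:

```yaml
# A Service selects Pods whose labels match its selector field.
kind: Pod
metadata:
  labels:
    run: nginx-backend   # label set on the Pod
---
kind: Service
spec:
  selector:
    run: nginx-backend   # the Service targets all Pods with this label
```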

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • Set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • Update gcloud components to the latest version:
    gcloud components update

Configuring a workload to accept external traffic

The following sections explain how to configure a workload to accept external traffic before exposing the workload.

Expose a port for your Pods

For a workload's Pods to accept traffic, you need to expose a port for the Pods to use. When you create a new controller object, such as a Deployment, you must specify a port to expose. The following section shows how to open a port at creation time using kubectl run.

kubectl run

The following kubectl run command creates a two-replica Deployment called my-nginx. The Deployment runs the nginx image from Docker Hub, applies the label run: nginx-backend to its Pods, and exposes port 80:

kubectl run my-nginx --replicas=2 --labels="run=nginx-backend" \
--image=nginx --port=80
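The same workload can also be declared in a manifest. The following is a minimal sketch of an equivalent Deployment (assuming the apps/v1 API, which may differ on older clusters), with the exposed port declared under the container's ports field:

```yaml
# Illustrative Deployment manifest roughly equivalent to the
# kubectl run command above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx-backend
  template:
    metadata:
      labels:
        run: nginx-backend
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80   # the port exposed for the Pods
```

You could apply a manifest like this with kubectl apply -f instead of running kubectl run.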

Inspecting the exposed Pods

After deploying a workload, you should inspect its Pods to ensure their IP addresses are provisioned.

When you run kubectl get pods -o wide, the output includes each Pod's IP address, similar to the following:

NAME                        READY     STATUS    RESTARTS   AGE       IP            NODE
my-nginx-3800858182-jr4a2   1/1       Running   0          13s    ...
my-nginx-3800858182-kna2y   1/1       Running   0          13s    ...

If you connect to any node in the cluster, you can reach the Pods using their IP addresses. Because the containers listen on port 80 of their own Pod IP addresses rather than on the node itself, you can run multiple Pods on the same node without port conflicts.

Exposing an application to external traffic

After you have exposed a port for your application, you can expose the application to external traffic using GCP Console or kubectl.


To expose a workload using GCP Console, perform the following steps:

  1. Visit the Kubernetes Engine Workloads menu in GCP Console.

  2. Select the desired workload from the menu.

  3. Click Actions, then click Expose.
  4. In the New port mapping section, set the Port field to the desired port and the Target port field to 80 (the container port).
  5. From the Service type drop-down menu, select Load balancer.
  6. Click Expose.


To expose a workload using kubectl, run kubectl expose, which creates a Service object that targets the application's Pods and routes traffic to them.

For example, to expose the my-nginx Deployment, run the following command:

kubectl expose deployment my-nginx --type="LoadBalancer" --name=my-nginx \
--port=8080 --target-port=80

This command creates a Service named my-nginx, which exposes the targeted Pods externally on port 8080 and load balances traffic across them.
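The equivalent Service can also be written as a manifest. The following is a minimal sketch (field values taken from the kubectl expose command above) in which external traffic on port 8080 is forwarded to the Pods' port 80:

```yaml
# Illustrative Service manifest equivalent to the kubectl expose
# command above.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer
  selector:
    run: nginx-backend   # matches the label applied by kubectl run
  ports:
  - protocol: TCP
    port: 8080           # port exposed by the load balancer
    targetPort: 80       # port the containers listen on
```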

To see the Service, run:

kubectl get service my-nginx

The output is similar to the following:

NAME         CLUSTER-IP   EXTERNAL-IP      PORT(S)          AGE
my-nginx     ...          XXX.XXX.XXX.XX   8080:30915/TCP   2h
kubernetes   ...          <none>           443/TCP          4h

The EXTERNAL-IP field contains the IP address at which external traffic can reach the Service on port 8080. While GCP provisions the load balancer, this field may show <pending>; re-run the command until an IP address appears.

What's next
