Exposing Applications using Services

This page explains how to expose your application to external traffic. To learn how Services work, refer to the Service documentation.

Overview

After you deploy an application to your cluster using a Deployment or StatefulSet object, its Pods are automatically assigned Pod IP addresses, which can be used to communicate within the cluster. To enable communication beyond the cluster, you need to expose the application to external traffic.

Pods can be grouped into Services. A Service targets Pods using a label selector, which matches labels defined in the Pod specifications. When users access the Service, their requests are automatically load-balanced across the Pods that serve your application.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

Configuring a Workload to Accept External Traffic

The following sections explain how to configure a workload to accept external traffic before exposing the workload.

Exposing a Port for Your Pods

For a workload's Pods to accept traffic, you need to expose a port for the Pods to use. When you deploy a workload, such as a Deployment, you must specify a port to expose.

For example, suppose that you have created the following Deployment, my-app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: my-app
        ports:
        - containerPort: 8080

my-app is labeled run: my-app, runs three replicated Pods, and opens TCP port 8080 on each Pod.
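If you saved this manifest to a file, you can create the Deployment with kubectl apply. The file name my-app.yaml is an assumption; substitute the name you saved the manifest under:

kubectl apply -f my-app.yaml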

Inspecting the Exposed Pods

After deploying a workload, you should inspect its Pods to verify that their IP addresses have been provisioned.

When you run kubectl get pods -o wide, you see output similar to the following, which includes each Pod's IP address:

NAME                      READY     STATUS    RESTARTS   AGE       IP           NODE
my-app-3800858182-jr4a2   1/1       Running   0          13s       10.244.3.4   ...
my-app-3800858182-kna2y   1/1       Running   0          13s       10.244.2.5   ...

If you connect to any node in the cluster, you can communicate with the Pods using their IP addresses. Because the containers do not occupy port 8080 on the node itself, multiple Pods that use the same containerPort can run on the same node without a port conflict.
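For example, from a shell on one of the cluster's nodes, you can send a request directly to a Pod. The IP address below is taken from the example output above; substitute an address from your own cluster:

curl http://10.244.3.4:8080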

Creating a Service

You can create a Service in the following ways:

  • Exposing an existing Pod or Deployment using kubectl expose.
  • Writing a Service manifest, then using kubectl apply to deploy the resource.
  • Using Google Cloud Platform Console.

kubectl expose

When you run kubectl expose, Google Kubernetes Engine creates a new Service that automatically targets the workload you specify.

For example, consider the Deployment my-app from the preceding section. The following command exposes my-app by creating a LoadBalancer type Service named my-service:

kubectl expose deployment my-app --type=LoadBalancer --name=my-service

When you run kubectl get service my-service -o=yaml, a Service manifest similar to the following is returned:

apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-app
  name: my-service
spec:
  clusterIP: 10.3.245.137
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32752
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: my-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10

Because you didn't specify a port in kubectl expose, the Service's port and targetPort both default to the Deployment's open containerPort, 8080.
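If you want the Service to listen on a different port than the container, you can pass the port flags to kubectl expose explicitly. The following variant, which assumes the same my-app Deployment, serves the Service on port 80 and forwards traffic to the Pods on port 8080:

kubectl expose deployment my-app --type=LoadBalancer --name=my-service --port=80 --target-port=8080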

kubectl apply

The following is an example of a Service manifest, config.yaml. This Service targets a set of Pods that expose port 8080 and are labeled run: my-app:

    kind: Service
    apiVersion: v1
    metadata:
      name: my-service # Service name
    spec:
      type: LoadBalancer
      selector:
        run: my-app # Label selector. The Service targets Pods that use this label
      ports:
      - port: 80 # Port used to access the Service from within the cluster
        targetPort: 8080 # Port opened by the targeted Pods

To create this Service, run the following command:

kubectl apply -f config.yaml
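Provisioning the external load balancer can take a minute or so. You can watch for the external IP address to be assigned; the IP addresses and node port in the output below are illustrative:

kubectl get service my-service --watch

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
my-service   LoadBalancer   10.3.245.137   203.0.113.10   80:30877/TCP   2m

Once the EXTERNAL-IP column is populated, clients outside the cluster can reach the application at that address on port 80.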

Console

To expose a workload, perform the following steps:

  1. Visit the Google Kubernetes Engine Workloads menu in GCP Console.

    Visit the Workloads menu

  2. Select the desired workload.

  3. Click Actions, then click Expose.
  4. From New port mapping, fill the Port field with the desired port and the Target port field with 8080 (the container port).
  5. From the Service type drop-down menu, select Load balancer.
  6. Click Expose.

Inspecting the Service

kubectl

To inspect the Service, run the following command:

kubectl describe service my-service
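The command returns output similar to the following; the values shown here follow the kubectl expose example from earlier and are illustrative:

Name:                     my-service
Namespace:                default
Labels:                   run=my-app
Selector:                 run=my-app
Type:                     LoadBalancer
IP:                       10.3.245.137
LoadBalancer Ingress:     203.0.113.10
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32752/TCP
Endpoints:                10.244.2.5:8080,10.244.3.4:8080
Session Affinity:         None
External Traffic Policy:  Cluster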

Console

To inspect the Service, perform the following steps:

  1. Visit the Google Kubernetes Engine Services menu in GCP Console.

    Visit the Services menu

  2. Select the desired Service.

Whitelisting Access to Load Balancers

When you create a LoadBalancer Service, you can specify the IP ranges that are allowed to access the load balancer in the loadBalancerSourceRanges field of your Service's specification. Currently, you can specify this field only when you write a Service manifest or when you edit an existing Service.
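For example, the following manifest, a variant of config.yaml from earlier, restricts access to the Service to two CIDR ranges. The ranges shown are illustrative placeholders; replace them with the IP ranges you want to allow:

    kind: Service
    apiVersion: v1
    metadata:
      name: my-service
    spec:
      type: LoadBalancer
      selector:
        run: my-app
      loadBalancerSourceRanges:
      - 203.0.113.0/24 # Example allowed range
      - 198.51.100.0/24 # Example allowed range
      ports:
      - port: 80
        targetPort: 8080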

To learn more, refer to Configure Your Cloud Provider's Firewalls in the Kubernetes documentation.
