Exposing Applications using Services

This page shows how to create Kubernetes Services in a Google Kubernetes Engine cluster. For an explanation of the Service concept and a discussion of the various types of Services, see Service.

Introduction

The idea of a Service is to group a set of Pod endpoints into a single resource. You can configure various ways to access the grouping. By default, you get a stable cluster IP address that clients inside the cluster can use to contact Pods in the Service. A client sends a request to the stable IP address, and the request is routed to one of the Pods in the Service.

There are five types of Services:

  • ClusterIP (default)
  • NodePort
  • LoadBalancer
  • ExternalName
  • Headless
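
The exercises below cover the first four types. As a brief sketch of the fifth, a headless Service is declared like a ClusterIP Service but with clusterIP explicitly set to None, which tells Kubernetes to skip the stable virtual IP and return the individual Pod IPs directly in DNS (the Service name my-headless-service here is illustrative, not part of the exercises):

```yaml
# Illustrative headless Service: clusterIP: None means no stable virtual IP
# is allocated; DNS lookups for this Service return the matching Pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None
  selector:
    app: metrics
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```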

This topic has several exercises. In each exercise, you create a Deployment and expose its Pods by creating a Service. Then you send an HTTP request to the Service.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

Creating a Service of type ClusterIP

kubectl apply

Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: metrics
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"

Copy the manifest to a file named my-deployment.yaml, and create the Deployment:

kubectl apply -f my-deployment.yaml

Verify that three Pods are running:

kubectl get pods

The output shows three running Pods:

NAME                             READY     STATUS    RESTARTS   AGE
my-deployment-76699757f9-h4xk4   1/1       Running   0          4s
my-deployment-76699757f9-tjcfq   1/1       Running   0          4s
my-deployment-76699757f9-wt9d8   1/1       Running   0          4s

Here is a manifest for a Service of type ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
spec:
  type: ClusterIP
  selector:
    app: metrics
    department: sales
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

The Service has a selector that specifies two labels:

  • app: metrics
  • department: sales

Each Pod in the Deployment that you created previously has those two labels. So the Pods in the Deployment will become members of this Service.

Copy the manifest to a file named my-cip-service.yaml, and create the Service:

kubectl apply -f my-cip-service.yaml

Wait a moment for Kubernetes to assign a stable internal address to the Service, and then view the Service:

kubectl get service my-cip-service --output yaml

The output shows a value for clusterIP.

spec:
  clusterIP: 10.59.241.241

Make a note of your clusterIP value for later.
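
If you prefer not to scan the full YAML, you can extract just the clusterIP field with a jsonpath query. A sketch, guarded so that it falls back to the example address shown above when no cluster is reachable:

```shell
# Extract only the clusterIP field from the Service. Guarded: if kubectl or
# the cluster is unavailable, fall back to the example value shown above.
CLUSTER_IP="$(kubectl get service my-cip-service \
  --output jsonpath='{.spec.clusterIP}' 2>/dev/null || true)"
CLUSTER_IP="${CLUSTER_IP:-10.59.241.241}"
echo "Internal clients can reach the Service at ${CLUSTER_IP}:80"
```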

Console

Create a Deployment

  1. Visit the Google Kubernetes Engine Workloads menu in GCP Console.

  2. Click Deploy.

  3. In the Container box, for Container image, enter gcr.io/google-samples/hello-app:2.0, and click Done.

  4. For Application name, enter my-deployment.

  5. Under Labels, create two labels. For one label, set Key to app, and set Value to metrics. For the other label, set Key to department, and set Value to sales.

  6. From the Cluster drop-down menu, select the desired cluster.

  7. Click Deploy.

  8. When your Deployment is ready, the Deployment details page opens, and you can see that your Deployment has one or more running Pods.

Create a Service to expose your Deployment

  1. In the Deployment details page, under Services, click Expose.

  2. In the New port mapping box, set Port to 80, and set Target port to 8080. Leave Protocol set to TCP. Click Done.

  3. From the Service type drop-down menu, select Cluster IP.

  4. For Service name, enter my-cip-service.

  5. Click Expose.

  6. When your Service is ready, the Service details page opens, and you can see details about your Service. In particular, you can see the Cluster IP value that Kubernetes assigned to your Service. This is the IP address that internal clients can use to call the Service. Make a note of the Cluster IP value for later.

Accessing your Service

List your running Pods:

kubectl get pods

In the output, copy one of the Pod names that begins with my-deployment.

NAME                               READY     STATUS    RESTARTS   AGE
my-deployment-6897d9577c-7z4fv     1/1       Running   0          5m

Get a shell into one of your running containers:

kubectl exec -it [YOUR_POD_NAME] -- sh

where [YOUR_POD_NAME] is the name of one of the Pods in my-deployment.

In your shell, install curl:

apk add --no-cache curl

In the container, make a request to your Service by using your cluster IP address and port 80. Notice that 80 is the value of the port field of your Service. This is the port that you use as a client of the Service.

curl [CLUSTER_IP]:80

where [CLUSTER_IP] is the value of clusterIP in your Service.

Your request is forwarded to one of the member Pods on TCP port 8080, which is the value of the targetPort field. Note that each of the Service's member Pods must have a container listening on port 8080.

The response shows the output of hello-app:

Hello, world!
Version: 2.0.0
Hostname: my-deployment-76699757f9-hsb5x
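
Because the Service load-balances across its member Pods, repeating the request usually returns a different Hostname value each time. A sketch to run in the same container shell (substitute your own clusterIP for the example address; the short timeout makes failed attempts return quickly):

```shell
# Send three requests; each may land on a different member Pod, visible as a
# different Hostname line. 10.59.241.241 is the example clusterIP from above.
OUT="$(for i in 1 2 3; do
  curl -s --max-time 2 10.59.241.241:80 2>/dev/null | grep Hostname \
    || echo "request $i: no cluster reachable"
done)"
echo "$OUT"
```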

To exit the shell to your container, enter exit.

Creating a Service of type NodePort

kubectl apply

Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-50000
spec:
  selector:
    matchLabels:
      app: metrics
      department: engineering
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: engineering
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50000"

Notice the env object in the manifest. The env object specifies that the PORT environment variable for the running container will have a value of 50000. The hello-app application listens on the port specified by the PORT environment variable. So in this exercise, you are telling the container to listen on port 50000.

Copy the manifest to a file named my-deployment-50000.yaml, and create the Deployment:

kubectl apply -f my-deployment-50000.yaml

Verify that three Pods are running:

kubectl get pods
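
Once the Pods are up, you can spot-check that the environment variable actually reached a container. A sketch (kubectl exec accepts a Deployment name and picks one of its Pods; guarded to fall back to the manifest's value when no cluster is reachable):

```shell
# Read the PORT environment variable from one of the Deployment's containers.
# Guarded: falls back to the value set in the manifest when no cluster exists.
PORT_VALUE="$(kubectl exec deployment/my-deployment-50000 -- printenv PORT \
  2>/dev/null || true)"
PORT_VALUE="${PORT_VALUE:-50000}"
echo "hello-app is listening on port ${PORT_VALUE}"
```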

Here is a manifest for a Service of type NodePort:

apiVersion: v1
kind: Service
metadata:
  name: my-np-service
spec:
  type: NodePort
  selector:
    app: metrics
    department: engineering
  ports:
  - protocol: TCP
    port: 80
    targetPort: 50000

Copy the manifest to a file named my-np-service.yaml, and create the Service:

kubectl apply -f my-np-service.yaml

View the Service:

kubectl get service my-np-service --output yaml

The output shows a nodePort value.

...
spec:
  ...
  ports:
  - nodePort: 30876
    port: 80
    protocol: TCP
    targetPort: 50000
  selector:
    app: metrics
    department: engineering
  sessionAffinity: None
  type: NodePort
...

If the nodes in your cluster have external IP addresses, find the external IP address of one of your nodes:

kubectl get nodes --output wide

The output shows the external IP addresses of your nodes:

NAME          STATUS    ROLES     AGE    VERSION        EXTERNAL-IP
gke-svc-...   Ready     none      1h     v1.9.7-gke.6   203.0.113.1

Not all clusters have external IP addresses for nodes. For example, the nodes in private clusters do not have external IP addresses.

Create a firewall rule to allow TCP traffic on your node port:

gcloud compute firewall-rules create test-node-port --allow tcp:[NODE_PORT]

where [NODE_PORT] is the value of the nodePort field of your Service.
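
The placeholder steps above can be scripted with jsonpath queries. A sketch that collects the node port and one node's external IP, then prints the address an external client would use (guarded: the example values from the outputs above are used when no cluster is reachable):

```shell
# Look up the assigned nodePort and the first node's external IP address.
# Guarded: fall back to the example values shown above without a cluster.
NODE_PORT="$(kubectl get service my-np-service \
  --output jsonpath='{.spec.ports[0].nodePort}' 2>/dev/null || true)"
NODE_PORT="${NODE_PORT:-30876}"
NODE_IP="$(kubectl get nodes \
  --output jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}' \
  2>/dev/null || true)"
NODE_IP="${NODE_IP:-203.0.113.1}"
echo "Open tcp:${NODE_PORT}, then browse to ${NODE_IP}:${NODE_PORT}"
```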

Console

Create a Deployment

  1. Visit the Google Kubernetes Engine Workloads menu in GCP Console.

  2. Click Deploy.

  3. In the Container box, for Container image, enter gcr.io/google-samples/hello-app:2.0. Click Add environment variables. For Key, enter PORT, and for Value, enter 50000. Click Done.

  4. For Application name, enter my-deployment-50000.

  5. Under Labels, create two labels. For one label, set Key to app, and set Value to metrics. For the other label, set Key to department, and set Value to engineering.

  6. From the Cluster drop-down menu, select the desired cluster.

  7. Click Deploy.

  8. When your Deployment is ready, the Deployment details page opens, and you can see that your Deployment has one or more running Pods.

Create a Service to expose your Deployment

  1. In the Deployment details page, under Services, click Expose.

  2. In the New port mapping box, set Port to 80, and set Target port to 50000. Leave Protocol set to TCP. Click Done.

  3. From the Service type drop-down menu, select Node port.

  4. For Service name, enter my-np-service.

  5. Click Expose.

  6. When your Service is ready, the Service details page opens, and you can see details about your Service. In particular, under Ports, you can see the Node Port value that Kubernetes assigned to your Service. Make a note of the Node Port value for later.

Create a firewall rule for your node port

  1. Visit the Firewall rules menu in GCP Console.

  2. Click Create firewall rule.

  3. For Name, enter test-node-port.

  4. From the Targets drop-down menu, select All instances in the network.

  5. For Source IP ranges, enter 0.0.0.0/0.

  6. Under Specified protocols and port, select tcp, and enter your node port value.

  7. Click Create.

Find the external IP address of one of your cluster nodes

  1. Visit the Google Kubernetes Engine menu in GCP Console.

  2. Click the name of the cluster you are using for this exercise.

  3. Under Node Pools, click the name of an instance group. A list of nodes is displayed. Make a note of the external IP address for one of the nodes.

Access your Service

In your browser's address bar, enter [NODE_IP_ADDRESS]:[NODE_PORT].

where:

  • [NODE_IP_ADDRESS] is the external IP address of one of your nodes.
  • [NODE_PORT] is your node port value.

The response shows the output of hello-app:

Hello, world!
Version: 2.0.0
Hostname: my-deployment-50000-695955857d-q76pb

Creating a Service of type LoadBalancer

kubectl apply

Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-50001
spec:
  selector:
    matchLabels:
      app: products
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: products
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"

Notice that the containers in this Deployment will listen on port 50001.

Copy the manifest to a file named my-deployment-50001.yaml, and create the Deployment:

kubectl apply -f my-deployment-50001.yaml

Verify that three Pods are running:

kubectl get pods

Here is a manifest for a Service of type LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  selector:
    app: products
    department: sales
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50001

Copy the manifest to a file named my-lb-service.yaml, and create the Service:

kubectl apply -f my-lb-service.yaml

When you create a Service of type LoadBalancer, a Google Cloud controller wakes up and configures a network load balancer. Wait a minute for the controller to configure the network load balancer and generate a stable IP address.

View the Service:

kubectl get service my-lb-service --output yaml

The output shows a stable external IP address under loadBalancer:ingress:.

...
spec:
  ...
  ports:
  - ...
    port: 60000
    protocol: TCP
    targetPort: 50001
  selector:
    app: products
    department: sales
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
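
As with the cluster IP, you can extract just the load balancer address with a jsonpath query. A sketch (the status field stays empty until the controller finishes provisioning; guarded to fall back to the example address when no cluster is reachable):

```shell
# Read the external IP that the Google Cloud controller attached to the
# Service. Guarded: fall back to the example value shown above if absent.
LB_IP="$(kubectl get service my-lb-service \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null || true)"
LB_IP="${LB_IP:-203.0.113.10}"
echo "External clients can reach the Service at ${LB_IP}:60000"
```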

Console

Create a Deployment

  1. Visit the Google Kubernetes Engine Workloads menu in GCP Console.

  2. Click Deploy.

  3. In the Container box, for Container image, enter gcr.io/google-samples/hello-app:2.0. Click Add environment variables. For Key, enter PORT, and for Value, enter 50001. Click Done.

  4. For Application name, enter my-deployment-50001.

  5. Under Labels, create two labels. For one label, set Key to app, and set Value to products. For the other label, set Key to department, and set Value to sales.

  6. From the Cluster drop-down menu, select the desired cluster.

  7. Click Deploy.

  8. When your Deployment is ready, the Deployment details page opens, and you can see that your Deployment has one or more running Pods.

Create a Service to expose your Deployment

  1. In the Deployment details page, click Expose.

  2. In the New port mapping box, set Port to 60000, and set Target port to 50001. Leave Protocol set to TCP. Click Done.

  3. From the Service type drop-down menu, select Load Balancer.

  4. For Service name, enter my-lb-service.

  5. Click Expose.

  6. When your Service is ready, the Service details page opens, and you can see details about your Service. In particular, you can see the external IP address of the load balancer. Make a note of the load balancer's IP address for later.

Access your Service

Wait a few minutes for GKE to configure the load balancer.

In your browser's address bar, enter [LOAD_BALANCER_ADDRESS]:60000.

where [LOAD_BALANCER_ADDRESS] is the external IP address of your load balancer.

The response shows the output of hello-app:

Hello, world!
Version: 2.0.0
Hostname: my-deployment-50001-644f8857c7-xxdwg

Notice that the value of port in a Service is arbitrary. The preceding example demonstrates this by using a port value of 60000.

Creating a Service of type ExternalName

A Service of type ExternalName provides an internal alias for an external DNS name. Internal clients make requests using the internal DNS name, and the cluster's DNS service resolves the internal name to the external name.

Here is a manifest for a Service of type ExternalName:

apiVersion: v1
kind: Service
metadata:
  name: my-xn-service
spec:
  type: ExternalName
  externalName: example.com

In the preceding example, the DNS name is my-xn-service.default.svc.cluster.local. When an internal client makes a request to my-xn-service.default.svc.cluster.local, the request gets redirected to example.com.
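
You can check the alias from inside the cluster; the lookup should show the internal name resolving to example.com. A sketch that starts a short-lived busybox Pod for the lookup (the Pod name dns-test and image tag are illustrative; guarded to print a note when no cluster is reachable):

```shell
# Resolve the Service's internal DNS name from a throwaway Pod; the answer
# should point at example.com. Guarded: prints a note without a cluster.
RESULT="$(kubectl run dns-test --rm -i --restart=Never --image=busybox:1.36 \
  -- nslookup my-xn-service.default.svc.cluster.local 2>/dev/null || true)"
echo "${RESULT:-no cluster reachable; expected an answer for example.com}"
```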

Using kubectl expose to create a Service

As an alternative to writing a Service manifest, you can create a Service by using kubectl expose to expose a Deployment.

To expose my-deployment, shown earlier in this topic, you could enter this command:

kubectl expose deployment my-deployment --name my-cip-service \
    --type ClusterIP --protocol TCP --port 80 --target-port 8080

To expose my-deployment-50000, shown earlier in this topic, you could enter this command:

kubectl expose deployment my-deployment-50000 --name my-np-service \
    --type NodePort --protocol TCP --port 80 --target-port 50000

To expose my-deployment-50001, shown earlier in this topic, you could enter this command:

kubectl expose deployment my-deployment-50001 --name my-lb-service \
    --type LoadBalancer --port 60000 --target-port 50001

Cleaning up

After completing the exercises on this page, follow these steps to remove the resources and avoid incurring unwanted charges to your account:

kubectl apply

Deleting your Services

kubectl delete services my-cip-service my-np-service my-lb-service

Deleting your Deployments

kubectl delete deployments my-deployment my-deployment-50000 my-deployment-50001

Deleting your firewall rule

gcloud compute firewall-rules delete test-node-port

Console

Deleting your Services

  1. Visit the Google Kubernetes Engine Services menu in GCP Console.

  2. Click my-cip-service, and click Delete.

  3. Click my-np-service, and click Delete.

  4. Click my-lb-service, and click Delete.

Deleting your Deployments

  1. Visit the Google Kubernetes Engine Workloads menu in GCP Console.

  2. Click my-deployment, and click Delete. Leave the Delete horizontal pod autoscaler my-deployment-hpa box selected, and click Delete.

  3. Click my-deployment-50000, and click Delete. Leave the Delete horizontal pod autoscaler my-deployment-50000-hpa box selected, and click Delete.

  4. Click my-deployment-50001, and click Delete. Leave the Delete horizontal pod autoscaler my-deployment-50001-hpa box selected, and click Delete.

Deleting your firewall rule

  1. Visit the Firewall rules menu in GCP Console.

  2. Click test-node-port, and click Delete.
