Standalone network endpoint groups

This page shows how to create a Kubernetes Service that is backed by a network endpoint group.

Overview

Google Cloud Platform has a variety of load balancing components that you can combine in different ways to provide load balancing for your applications. The steps in this topic combine the following components to provide load balancing for a Kubernetes Service:

  • A network endpoint group, created automatically by a GKE controller
  • A health check
  • A backend service
  • A TCP proxy
  • A forwarding rule

The following diagram illustrates the connections between the components:

Diagram of load balancing components for a network endpoint group

Here's an overview of the steps you perform in this topic:

  1. Create a Deployment that has three Pods. Each Pod has a container listening on TCP port 50000.

  2. Create a Service. In your Service manifest, include the cloud.google.com/neg annotation. This causes a GKE controller to create a network endpoint group.

  3. Create a health check.

  4. Create a backend service. Connect the backend service to your health check and your network endpoint group.

  5. Create a TCP proxy. Connect the TCP proxy to your backend service.

  6. Create a forwarding rule. Connect the forwarding rule to your TCP proxy.

  7. Test your forwarding rule.

Before you begin

You must have a cluster that is running Google Kubernetes Engine version 1.10 or later.

Your cluster must be VPC-native. To learn more, see Creating VPC-native clusters using Alias IPs.

Your cluster must have HTTP load-balancing enabled. GKE clusters have HTTP load-balancing enabled by default; you must not disable it.

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update
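To confirm that an existing cluster meets the version and VPC-native requirements above, you can inspect it with gcloud. This is a sketch; replace [CLUSTER_NAME] with the name of your cluster:

```shell
# Check the cluster's GKE version (must be 1.10 or later).
gcloud container clusters describe [CLUSTER_NAME] \
    --format "value(currentMasterVersion)"

# Check whether the cluster is VPC-native: a VPC-native cluster
# reports True for ipAllocationPolicy.useIpAliases.
gcloud container clusters describe [CLUSTER_NAME] \
    --format "value(ipAllocationPolicy.useIpAliases)"
```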

Creating a network endpoint group

Here's a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      purpose: demo
      topic: standalone-neg
  replicas: 3
  template:
    metadata:
      labels:
        purpose: demo
        topic: standalone-neg
    spec:
      containers:
      - name: hello
        image: debian
        command: ["/bin/bash"]
        args:
        - "-c"
        - "apt-get update && apt-get install -y netcat && while true; do echo Hello TCP | nc -l -p 50000; done"

The Deployment has three Pods, each of which has one container.

The command and args fields in the Deployment manifest use netcat (nc) to implement a simple TCP server that listens for connections on port 50000.

Save the manifest to a file named my-deployment.yaml, and create the Deployment:

kubectl apply -f my-deployment.yaml
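Before continuing, you can verify that the three Pods are running and carry the labels the Service will select on. A sketch; [POD_NAME] is a placeholder for one of the Pod names from the listing:

```shell
# List the Pods created by the Deployment, selected by the same
# labels that the Service manifest uses.
kubectl get pods -l purpose=demo,topic=standalone-neg

# Confirm that a Pod's container is listening on TCP port 50000.
kubectl exec [POD_NAME] -- nc -z localhost 50000 && echo listening
```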

Here's a manifest for a Service:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
spec:
  type: ClusterIP
  selector:
    purpose: demo
    topic: standalone-neg
  ports:
  - name: my-service-port
    protocol: TCP
    port: 80
    targetPort: 50000

For the purpose of this exercise, these are the important things to understand about the Service:

  • Any Pod that has the label purpose: demo and the label topic: standalone-neg is a member of the Service.

  • The Service has one servicePort structure named my-service-port. The cloud.google.com/neg annotation says that my-service-port will be associated with a network endpoint group.

  • Each member Pod must have a container that is listening on TCP port 50000.

Save the manifest to a file named my-service.yaml, and create the Service:

kubectl apply -f my-service.yaml

When you create the Service, a GKE controller creates a network endpoint group.

Wait a minute for the network endpoint group to be created. Then you can inspect the network endpoint group by using gcloud or the Google Cloud Platform Console.
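You can also check the Service itself: the GKE controller records the name and zones of the group it created in a cloud.google.com/neg-status annotation on the Service. A sketch; the exact annotation contents can vary by GKE version:

```shell
# Print the NEG status annotation that the GKE controller adds to
# the Service. Dots in the annotation key are escaped with '\.'
# in the jsonpath expression.
kubectl get service my-service \
    -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'
```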

gcloud

List the network endpoint groups:

gcloud beta compute network-endpoint-groups list

The output shows that your network endpoint group has three endpoints:

NAME                                          LOCATION       ENDPOINT_TYPE   SIZE
k8s1-70aa83a6-default-my-service-80-c9710a6f  us-central1-a  GCE_VM_IP_PORT  3

View the endpoints:

gcloud beta compute network-endpoint-groups list-network-endpoints \
    [NETWORK_ENDPOINT_GROUP_NAME]

where [NETWORK_ENDPOINT_GROUP_NAME] is the name of your network endpoint group.

The output shows three endpoints. Each endpoint is a (Pod IP address, port) pair.

INSTANCE                                           IP_ADDRESS  PORT
gke-standard-cluster-3-default-pool-4cc71a15-qlpf  10.12.1.43  50000
gke-standard-cluster-3-default-pool-4cc71a15-qlpf  10.12.1.44  50000
gke-standard-cluster-3-default-pool-4cc71a15-w9nk  10.12.2.26  50000

Console

  1. Visit the Network endpoint groups page in GCP Console.

    A list of your project's network endpoint groups appears.

  2. Click the name of your network endpoint group. The details of your network endpoint group appear, including the number of endpoints and their subnets, IP addresses, and ports.

Creating a firewall rule

Later in this exercise, you create a health check and a TCP proxy.

Create a firewall rule that allows TCP traffic from 130.211.0.0/22 and 35.191.0.0/16, the source IP ranges that Google Cloud proxy load balancers and health check systems use to reach your backends:

gcloud compute firewall-rules create my-fwr --network default \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 --allow tcp

Creating a health check and a backend service

Create a health check:

gcloud beta compute health-checks create tcp my-hc --use-serving-port --global

Create a backend service:

gcloud compute backend-services create my-bes --global \
    --protocol tcp --health-checks my-hc

Add your network endpoint group to the backend service:

gcloud beta compute backend-services add-backend my-bes --global \
   --network-endpoint-group [NETWORK_ENDPOINT_GROUP_NAME] \
   --network-endpoint-group-zone [NETWORK_ENDPOINT_GROUP_ZONE] \
   --balancing-mode CONNECTION --max-connections-per-endpoint 5

where:

  • [NETWORK_ENDPOINT_GROUP_NAME] is the name of your network endpoint group.
  • [NETWORK_ENDPOINT_GROUP_ZONE] is the zone of your network endpoint group.
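For example, using the sample group name and zone shown in the listing earlier on this page, the command would look like this (substitute your own values):

```shell
# Example values taken from the sample gcloud output above; your
# generated NEG name and zone will differ.
gcloud beta compute backend-services add-backend my-bes --global \
    --network-endpoint-group k8s1-70aa83a6-default-my-service-80-c9710a6f \
    --network-endpoint-group-zone us-central1-a \
    --balancing-mode CONNECTION --max-connections-per-endpoint 5
```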

Describe your backend service:

gcloud compute backend-services describe my-bes --global

The output shows that the backend service is associated with your health check and your network endpoint group.

...
backends:
- balancingMode: CONNECTION
  capacityScaler: 1.0
  group: ... /networkEndpointGroups/k8s1-70aa83a6-default-my-service-80-c9710a6f
...
healthChecks:
- ... /healthChecks/my-hc
...
name: my-bes
...

Wait five minutes for your backend service to be configured.

Check the health of your backend service:

gcloud beta compute backend-services get-health my-bes --global

The output shows the health of each endpoint:

...
status:
  healthStatus:
  - healthState: HEALTHY
    instance: ... gke-standard-cluster-3-default-pool-4cc71a15-qlpf
    ipAddress: 10.12.1.43
    port: 50000
  - healthState: HEALTHY
    instance: ... gke-standard-cluster-3-default-pool-4cc71a15-qlpf
    ipAddress: 10.12.1.44
    port: 50000
  - healthState: HEALTHY
    instance: ... gke-standard-cluster-3-default-pool-4cc71a15-w9nk
    ipAddress: 10.12.2.26
    port: 50000
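If any endpoint reports a state other than HEALTHY, a common cause is a missing or misconfigured firewall rule. A troubleshooting sketch:

```shell
# Confirm that the firewall rule created earlier allows TCP traffic
# from Google's health check source ranges (130.211.0.0/22 and
# 35.191.0.0/16); check sourceRanges and allowed in the output.
gcloud compute firewall-rules describe my-fwr
```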

Creating a TCP proxy and a forwarding rule

Create a TCP proxy:

gcloud compute target-tcp-proxies create my-tp --backend-service my-bes

Create a forwarding rule:

gcloud compute forwarding-rules create my-for --global \
    --ip-protocol tcp --ports 25 --target-tcp-proxy my-tp

View the forwarding rule:

gcloud compute forwarding-rules describe my-for --global

The output shows the external IP address of the forwarding rule:

IPAddress: 203.0.113.1
IPProtocol: TCP
...
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: my-for
...
portRange: 25-25

Wait ten minutes for the forwarding rule to be configured.

Use netcat to test the forwarding rule. Note that some networks block outbound connections on TCP port 25 (the SMTP port); if the connection times out, try testing from a host on a network that allows such traffic.

nc -v [EXTERNAL_IP_ADDRESS] 25

where [EXTERNAL_IP_ADDRESS] is the external IP address of the forwarding rule.

The output shows the reply from one of your containers:

Connection to 203.0.113.1 25 port [tcp/smtp] succeeded!
Hello TCP

Cleaning up

After completing the tasks on this page, run the following commands to remove the resources and prevent unwanted charges from accruing to your account:

gcloud compute forwarding-rules delete my-for --global --quiet
gcloud compute target-tcp-proxies delete my-tp --quiet
gcloud compute backend-services delete my-bes --quiet --global
gcloud compute firewall-rules delete my-fwr --quiet
gcloud compute health-checks delete my-hc --quiet
kubectl delete service my-service
kubectl delete deployment my-deployment
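Deleting the Service prompts the GKE controller to clean up the network endpoint group it created; this can take a few minutes. To verify that the group is gone (a sketch):

```shell
# After the Service is deleted, the generated network endpoint group
# should eventually disappear from this listing.
gcloud beta compute network-endpoint-groups list
```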
