Network load balancing

This topic shows you how to set up an L4 load balancer backed by an Azure Standard Load Balancer using GKE on Azure.

When you create a Service of type LoadBalancer, a GKE on Azure controller configures an Azure Load Balancer.

Before you begin

These steps require an existing GKE on Azure cluster and a kubectl command-line tool configured to connect to it.

Selecting a public or private load balancer

Service load balancers can be either public, with public frontend IP addresses, or internal, accessible only through private IP addresses.

By default, a new Service is public. To create an internal load balancer, you set the service.beta.kubernetes.io/azure-load-balancer-internal annotation to "true" in your manifest.

Choosing a subnet for internal load balancers

When you create an internal load balancer, GKE on Azure needs to choose the subnet to place the load balancer in. The default service load balancer subnet is selected from the cluster's creation parameters in the following order (you can check both values with the gcloud CLI, as shown after the list):

  1. If specified and non-empty, cluster.networking.serviceLoadBalancerSubnetId
  2. Otherwise, cluster.controlPlane.subnetId
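
If you're not sure which value applies to your cluster, one way to check, assuming the gcloud CLI with the GKE Multi-Cloud commands is installed, is to describe the cluster and read these two fields. CLUSTER_NAME and GOOGLE_CLOUD_LOCATION are placeholders for your cluster's name and its Google Cloud location:

gcloud container azure clusters describe CLUSTER_NAME \
    --location GOOGLE_CLOUD_LOCATION \
    --format "value(networking.serviceLoadBalancerSubnetId, controlPlane.subnetId)"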

Alternatively, you can specify the subnet to use for a given load balancer by adding the service.beta.kubernetes.io/azure-load-balancer-internal-subnet annotation to the Service. The value of this annotation is the subnet's name.
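
For example, the following partial Service manifest places an internal load balancer in a subnet named my-lb-subnet. The subnet name is a placeholder; replace it with the name of a subnet in your cluster's virtual network:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "my-lb-subnet"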

Creating an example LoadBalancer

You create a load balancer by creating a Deployment and exposing that Deployment with a Service.

  1. Create your Deployment. Containers in this Deployment listen on port 50001. Save the following YAML to a file named my-deployment-50001.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment-50001
    spec:
      selector:
        matchLabels:
          app: products
          department: sales
      replicas: 3
      template:
        metadata:
          labels:
            app: products
            department: sales
        spec:
          containers:
          - name: hello
            image: "gcr.io/google-samples/hello-app:2.0"
            env:
            - name: "PORT"
              value: "50001"
    
  2. Create the Deployment with kubectl apply:

    kubectl apply -f my-deployment-50001.yaml
    
  3. Verify that three Pods are running:

    kubectl get pods --selector=app=products
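
    The output resembles the following; your Pod names, suffixes, and ages will differ:

    NAME                                   READY   STATUS    RESTARTS   AGE
    my-deployment-50001-84b6dc5555-7xk2p   1/1     Running   0          40s
    my-deployment-50001-84b6dc5555-9qv4r   1/1     Running   0          40s
    my-deployment-50001-84b6dc5555-zmk7q   1/1     Running   0          40s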
    
  4. Create a Service of type LoadBalancer for your Deployment. You can create an Azure Standard Load Balancer that is either public or internal. Choose one of the following options.

    Copy one of the following manifests to a file named my-lb-service.yaml.

    Public

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service
    spec:
      type: LoadBalancer
      selector:
        app: products
        department: sales
      ports:
      - protocol: TCP
        port: 60000
        targetPort: 50001
    

    Internal

    You create an internal LoadBalancer by setting the annotation service.beta.kubernetes.io/azure-load-balancer-internal to "true". The following YAML includes this annotation.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: products
        department: sales
      ports:
      - protocol: TCP
        port: 60000
        targetPort: 50001

  5. Create the Service with kubectl apply:

    kubectl apply -f my-lb-service.yaml
    
  6. View the Service's address with kubectl get service.

    kubectl get service my-lb-service
    

    The output includes an EXTERNAL-IP column that shows the address of the load balancer (either public or private, depending on how the load balancer was created).
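
    While Azure provisions the load balancer, the EXTERNAL-IP column might show <pending>; re-run the command until an address appears. With an illustrative address and node port, the output resembles the following:

    NAME            TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)           AGE
    my-lb-service   LoadBalancer   10.0.12.34   203.0.113.10   60000:31234/TCP   2m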

  7. If you created a public load balancer, you can connect to it with curl. Replace external-ip with the address from the output of kubectl get service in the previous step.

    curl http://external-ip:60000
    

    The output resembles the following:

    Hello, world!
    Version: 2.0.0
    Hostname: my-deployment-50001-84b6dc5555-zmk7q
    

Cleaning up

To remove the Service and Deployment, use kubectl delete.

kubectl delete -f my-lb-service.yaml
kubectl delete -f my-deployment-50001.yaml