Creating a load balancer

This topic shows you how to set up an AWS Elastic Load Balancer (ELB) with GKE on AWS.

When you create a Service of type LoadBalancer, a GKE on AWS controller configures a Classic or Network ELB on AWS.

You can also follow the Quickstart to create an externally facing Classic ELB from the Google Cloud console, or see Create an AWS Application Load Balancer (ALB).

Before you begin

Before you start using GKE on AWS, make sure you have performed the following tasks:

  • Install a management service.
  • Create a user cluster.
  • From your anthos-aws directory, use anthos-gke to switch context to your user cluster.
    cd anthos-aws
    env HTTPS_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials CLUSTER_NAME
    Replace CLUSTER_NAME with your user cluster name.
  • Have the curl command line tool or a similar tool installed.
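
Optionally, you can confirm that kubectl can reach your user cluster's API server through the local proxy before you continue:

env HTTPS_PROXY=http://localhost:8118 \
  kubectl cluster-info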

Selecting an external or internal load balancer

GKE on AWS creates an external (in your public subnet) or internal (in your private subnet) load balancer depending on an annotation on the Service of type LoadBalancer.

If you select an external load balancer, it is accessible from the IP address ranges allowed by the node pool's security groups and the subnet's network access control lists (ACLs).
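
For example, to review which source ranges are allowed, you can inspect the security group rules and the subnet's network ACL with the AWS CLI. The following is a sketch; SECURITY_GROUP_ID and SUBNET_ID are placeholders for your node pool's security group and your public subnet.

aws ec2 describe-security-groups --group-ids SECURITY_GROUP_ID

aws ec2 describe-network-acls \
  --filters Name=association.subnet-id,Values=SUBNET_ID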

Choosing a load balancer type

Choose if you want to create a Classic Load Balancer (Classic ELB) or a Network Load Balancer (NLB). For more information on the differences between load balancer types, see Load balancer types in the AWS documentation.

Creating a LoadBalancer

You create a load balancer by creating a Deployment and exposing that Deployment with a Service.

  1. Create your Deployment. Containers in this Deployment listen on port 50001. Save the following YAML to a file named my-deployment-50001.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment-50001
    spec:
      selector:
        matchLabels:
          app: products
          department: sales
      replicas: 3
      template:
        metadata:
          labels:
            app: products
            department: sales
        spec:
          containers:
          - name: hello
            image: "gcr.io/google-samples/hello-app:2.0"
            env:
            - name: "PORT"
              value: "50001"
    
  2. Create the Deployment with kubectl apply:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f my-deployment-50001.yaml
    
  3. Verify that three Pods are running:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get pods --selector=app=products
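
    The output resembles the following; the Pod name suffixes and ages will differ:

    NAME                                   READY   STATUS    RESTARTS   AGE
    my-deployment-50001-84b6dc5555-2b8c9   1/1     Running   0          30s
    my-deployment-50001-84b6dc5555-7zlcf   1/1     Running   0          30s
    my-deployment-50001-84b6dc5555-zmk7q   1/1     Running   0          30s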
    
  4. Create a Service of type LoadBalancer for your Deployment. You can create a Classic ELB or an NLB on either your public or private subnet. Choose one of the following options:

    • A Classic ELB on the public subnet
    • An NLB on the public subnet
    • A Classic ELB on the private subnet
    • An NLB on the private subnet

    Then copy the manifest for the option you chose to a file named my-lb-service.yaml.

    Classic Public

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service
    spec:
      type: LoadBalancer
      selector:
        app: products
        department: sales
      ports:
      - protocol: TCP
        port: 60000
        targetPort: 50001
    

    NLB Public

    You create an NLB by setting the annotation service.beta.kubernetes.io/aws-load-balancer-type to nlb. The following YAML includes this annotation.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
    spec:
      type: LoadBalancer
      selector:
        app: products
        department: sales
      ports:
      - protocol: TCP
        port: 60000
        targetPort: 50001
    

    Classic Private

    You create a private LoadBalancer by setting the annotation service.beta.kubernetes.io/aws-load-balancer-internal to "true". The following YAML includes this annotation.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: products
        department: sales
      ports:
      - protocol: TCP
        port: 60000
        targetPort: 50001
    

    NLB Private

    You create a private NLB by setting both of the following annotations:

    • service.beta.kubernetes.io/aws-load-balancer-internal to "true"
    • service.beta.kubernetes.io/aws-load-balancer-type to nlb

    The following YAML includes both annotations.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
    spec:
      type: LoadBalancer
      selector:
        app: products
        department: sales
      ports:
      - protocol: TCP
        port: 60000
        targetPort: 50001
    
  5. Create the Service with kubectl apply:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f my-lb-service.yaml
    
  6. View the Service's hostname with kubectl get service:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get service my-lb-service \
      --output jsonpath="{.status.loadBalancer.ingress..hostname}{'\n'}"
    

    The output resembles elb-id.elb.aws-region.amazonaws.com.
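
    The load balancer can take a few minutes to provision, so the hostname field might initially be empty. If so, wait and rerun the command, or use a small polling loop such as the following sketch (assuming a Bash-compatible shell):

    until env HTTPS_PROXY=http://localhost:8118 \
      kubectl get service my-lb-service \
      --output jsonpath="{.status.loadBalancer.ingress..hostname}" \
      | grep -q "amazonaws.com"; do
      sleep 10
    done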

  7. If you have created an externally facing load balancer and you have access to the public VPC subnet, you can connect to the load balancer with curl. Replace EXTERNAL_HOSTNAME with the hostname from the output of kubectl get service in the previous step.

    curl EXTERNAL_HOSTNAME:60000
    

    The output resembles the following:

    Hello, world!
    Version: 2.0.0
    Hostname: my-deployment-50001-84b6dc5555-zmk7q
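
    If you created an internal load balancer instead, its hostname resolves to private addresses inside your VPC, so test it from within the cluster. The following sketch runs curl in a temporary Pod; the curlimages/curl image and the INTERNAL_HOSTNAME placeholder are illustrative.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl run curl-test --rm -it --restart=Never \
      --image=curlimages/curl -- curl INTERNAL_HOSTNAME:60000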
    

Cleaning up

To remove the Service and Deployment, use kubectl delete. Deleting the Service also removes the load balancer that GKE on AWS created for it.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f my-lb-service.yaml

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f my-deployment-50001.yaml

Troubleshooting

If you cannot access a load balancer endpoint, the controller might not have been able to determine which subnets can host the load balancer. Try tagging your subnets.
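
As a rough sketch, the upstream Kubernetes conventions for load balancer subnet discovery look like the following; whether GKE on AWS expects exactly these tags is an assumption, and SUBNET_ID is a placeholder for your subnet ID.

# Tag a public subnet so it can host external load balancers.
aws ec2 create-tags --resources SUBNET_ID \
  --tags Key=kubernetes.io/role/elb,Value=1

# Tag a private subnet so it can host internal load balancers.
aws ec2 create-tags --resources SUBNET_ID \
  --tags Key=kubernetes.io/role/internal-elb,Value=1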

What's next