Configuring TCP/UDP load balancing

Overview

You can create a TCP/UDP load balancer by specifying type: LoadBalancer in a Service's specification. This page explains the parameters you can use to configure LoadBalancer Services. For more information specific to internal load balancers, see Using an internal TCP/UDP load balancer. For more information specific to external load balancers, see Exposing applications using services.
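For example, a minimal manifest for a Service of this type might look like the following sketch (the Service name, selector, and ports are illustrative placeholders):

```yaml
# A minimal Service of type LoadBalancer. The name, selector,
# and ports are placeholders; adjust them to match your workload.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```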

Service parameters

The following parameters are supported for Google Kubernetes Engine (GKE) LoadBalancer Services.

| Feature | Summary | Service field | GKE version support |
|---|---|---|---|
| Local External Traffic Policy | Configures whether external traffic is load balanced across GKE nodes. | spec:externalTrafficPolicy:Local | GKE 1.14+ |
| Load Balancer Source Ranges | Configures optional firewall rules in GKE and in the VPC to allow only certain source ranges. | spec:loadBalancerSourceRanges | All supported versions |
| Load Balancer IP | Specifies an IP address for the load balancer. | spec:loadBalancerIP | All supported versions |
| All-ports | The ability for the TCP/UDP load balancer to forward all ports instead of specific ports. | N/A | No native support |

External traffic policy

The externalTrafficPolicy field is a standard Service option that defines how and whether traffic incoming to a GKE node is load balanced. Cluster is the default policy, but Local is often used to preserve the source IP of traffic coming into a cluster node. Local effectively disables load balancing on the cluster node, so that traffic received by a local Pod sees the original source IP address.
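As an illustrative sketch, a Service that sets the Local policy to preserve client source IPs might look like this (the name, selector, and ports are placeholders):

```yaml
# Illustrative Service using the Local external traffic policy so that
# backend Pods see the client's original source IP. Values are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```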

externalTrafficPolicy is supported for internal LoadBalancer Services (through the TCP/UDP load balancer), but the load balancing behavior depends on where traffic originates and on the configured traffic policy.

Traffic sourced from outside the cluster to a TCP/UDP load balancer will have the following behavior if there is at least one healthy Pod of the Service in the cluster:

  • Cluster policy: Traffic is load balanced to any healthy GKE node in the cluster, and kube-proxy then sends it to a node with the Pod.
  • Local policy: Nodes that do not have one of the backend Pods appear as unhealthy to the TCP/UDP load balancer. Traffic is sent only to one of the remaining healthy cluster nodes that have the Pod. kube-proxy does not route the traffic again; instead, it is sent directly to the local Pod with its IP header information intact.

If traffic to a given LoadBalancer Service IP is sourced from a GKE node inside the cluster, the behavior is different. The following table summarizes the behavior of traffic sourced by a node or Pod inside the cluster and destined for a member Pod of a LoadBalancer Service:

| externalTrafficPolicy | Service member Pod running on the same node where traffic originates? | Traffic behavior |
|---|---|---|
| Cluster | Yes | Packets are delivered to any member Pod, either on the node or on a different node. |
| Cluster | No | Packets are delivered to any member Pod, which must be on a different node. |
| Local | Yes | Packets are delivered to any member Pod on the same node. |
| Local | No | Kubernetes 1.14 and earlier: packets are dropped. Kubernetes 1.15 and later: packets are delivered to any member Pod, which must be on a different node. |

Load balancer source ranges

The spec:loadBalancerSourceRanges array specifies one or more internal IP address ranges. loadBalancerSourceRanges restricts traffic through the load balancer to the source ranges specified in this field. With this configuration, kube-proxy creates the corresponding iptables rules on the Kubernetes nodes, and GKE automatically creates a firewall rule in your VPC network. If you omit this field, your Service accepts traffic from any IP address (0.0.0.0/0).
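For example, a Service that accepts load-balanced traffic only from two source ranges might be configured as follows (the ranges, name, and ports are illustrative):

```yaml
# Illustrative Service restricting load balancer traffic to two source
# ranges. The ranges and Service name are examples only.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.0.0/8
  - 192.168.0.0/16
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
```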

For more information about the Service specification, see the Service API reference.

Load balancer IP

The spec:loadBalancerIP field enables you to choose a specific IP address for the load balancer. The IP address must not be in use by another internal TCP/UDP load balancer or Service. If omitted, an ephemeral IP address is assigned. For more information, see Reserving a Static Internal IP Address.
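For example, a Service that requests a specific load balancer IP address might look like the following sketch (the address, name, and ports are placeholders):

```yaml
# Illustrative Service requesting a specific load balancer IP address.
# The address shown is a placeholder; use an address reserved in your VPC.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.50
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
```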

All-ports

If you create an internal TCP/UDP load balancer by using an annotated Service, there is no way to set up a forwarding rule that uses all ports. However, if you create an internal TCP/UDP load balancer manually, you can choose your Google Kubernetes Engine nodes' instance group as the backend. Kubernetes Services of type: NodePort are available through the ILB.
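As a sketch of the NodePort side of this setup, the following Service exposes a port on every node, which a manually created ILB using the instance group as its backend could then forward to (all values are illustrative):

```yaml
# Illustrative NodePort Service. When a manually created internal TCP/UDP
# load balancer uses the cluster's instance group as its backend, traffic
# sent to any node on this nodePort reaches the Service's Pods.
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080
```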

What's next