Anthos in depth: Easy load balancing for your on-prem workloads
Mahesh Narayanan
Product Manager, GKE
Yuan Liu
Software Engineer, GKE
For organizations that need to run their workloads on-prem, Anthos is a real game changer. As a hybrid multi-cloud platform that’s managed by Google Cloud, Anthos includes all the innovations that we’ve developed for Google Kubernetes Engine (GKE) over the years, but running in the customer’s data center. And as such, Anthos can integrate with your existing on-prem networking stack.
One of the key pieces of integration is getting traffic into the Anthos cluster, which often involves using an external load balancer. When running Anthos on Google Cloud, you create a Kubernetes service that's accessible from the internet through an Ingress or a Service of type LoadBalancer, and Google Cloud takes care of assigning the virtual IP (VIP) and making it available to the rest of the world. In contrast, when running Anthos on-prem, advertising the service’s VIP to your on-prem network happens through an external load balancer.
Anthos provides three different options for deploying an external load balancer: the F5 Container Ingress Services (CIS) controller; manually mapping your load balancer to Kubernetes with static mapping; and Anthos’ own bundled load balancer.
In this post, we’ll introduce these three options and dive deep into the Anthos bundled load balancer.
F5 load balancing
In this mode, Anthos integrates with F5 by including the F5 Container Ingress Services (CIS) controller with Anthos running on-prem. This approach is ideal if you have an existing investment in F5 load balancing and want to use it with your Anthos on-prem cluster.
Manual load balancing
If you have another third-party load balancer, you can manually map your external load balancer to your Kubernetes resources, allowing you to use the load balancer of your choice. As there is no controller here to map the Kubernetes resources to the external load balancer, you need to perform static mapping of the load balancer service.
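As a sketch of what static mapping looks like in practice (names and port numbers here are illustrative, not from the source): you expose the application on a fixed NodePort, then manually configure your external load balancer to forward its VIP to that port on each cluster node.

```yaml
# Hypothetical example: a NodePort Service for manual load balancing.
# After applying this, you would statically configure the external
# load balancer to forward VIP:443 -> <node-ip>:32443 on each node.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 32443       # the fixed port your external LB targets
```

Because nothing reconciles the two sides, any change to the node pool or the port assignment must be reflected in the external load balancer's configuration by hand.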
Anthos-bundled load balancing
In both the above modes, there are costs (licensing and hardware) and expertise associated with managing the external load balancer. More importantly, there can be organizational friction, both technical and non-technical, as external load balancers and Anthos clusters are often managed by different teams. Anthos’ bundled load balancer provides an option to customers who want to program the VIP dynamically, without having to configure or support a third-party option.
The Anthos-bundled load balancer takes care of integrating external load balancer functionality as well as announcing the VIP to the external world. In contrast to the previous modes, Anthos itself now bridges the Kubernetes domain with the rest of your network. This approach brings several advantages:
The team managing the on-prem Anthos cluster also manages the advertisement of VIPs. This removes the need for tight collaboration and dependencies between different organizations, groups and admins.
Costs are streamlined, as you don’t have to manage a separate invoice, bill or vendor for your external load balancing needs.
Management is simplified, as Anthos controls both the controller and the VIP announcement. This has benefits in operational management, support, provisioning and more, making for a more seamless experience.
Multinational investment banking firm HSBC uses Anthos’ bundled load balancer and reports that it’s easy to install and configure, with minimal system requirements.
“Anthos running on-premises has brought the best of Google’s managed Kubernetes to our data centers. Specifically, the bundled load-balancer provides HSBC with a highly available, high performing, layer 4 load-balancer with minimal system requirements. Configuration and installation are simple and automate deployment for each new on-prem cluster. This decreases our time to market, installation complexity, and costs for each cluster we deploy.” - Scott Surovich, Global Container Engineering Lead, HSBC Operations, Services & Technology
Using the Anthos bundled load balancer
Using Anthos’ bundled load balancer on-prem is a relatively straightforward process.
The bundled load balancer uses the Seesaw load balancer, which Google created and open sourced. In high-availability mode, two instances run as an active-passive pair, speaking the standard Virtual Router Redundancy Protocol (VRRP). With today’s default configuration, the passive instance becomes active if it does not receive an advertisement from the active instance for two seconds.
You can create a load-balancer-typed Kubernetes service to expose your application through the bundled load balancer. For example:
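A minimal manifest might look like the following (the name, selector, ports and VIP address are illustrative, not from the source):

```yaml
# Hypothetical example: a Service of type LoadBalancer handled by
# the bundled load balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # illustrative name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10    # a VIP from your on-prem range (example address)
  selector:
    app: my-app
  ports:
  - port: 80                      # port clients connect to on the VIP
    targetPort: 8080              # port the application pods listen on
```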
Here, the bundled load balancer exposes a service to clients on port 80. The service configuration is sent to the load balancer automatically, which begins to announce the service VIP (SVIP) by replying to ARP (Address Resolution Protocol) requests. The load balancer runs in IPVS gatewaying mode (also known as “direct routing” mode): it leaves the IP layer of each packet untouched and delivers packets to a Kubernetes node by rewriting only the destination MAC address. Because this mode doesn’t add any additional IP headers to the traffic, it doesn’t impact performance. The Kubernetes data plane on the node (iptables in this case) then picks up the packets destined for SVIP:80 and routes them to the backend pods.

Thanks to the gatewaying mode, the load balancer achieves Direct Server Return (DSR): responses bypass the load balancers entirely, which reduces the capacity the load balancers need. DSR also means the client’s source IP can be made visible to pods by setting “externalTrafficPolicy” to “Local” on the service.
No external load balancer? No problem
If you don’t have an external load balancer that’s qualified for your network—or don’t have the in-house expertise to set one up—Anthos’ bundled load balancer can help. And thankfully, it’s easy to set up and use. Click here to learn more about Anthos’ networking capabilities, and stay tuned for our upcoming post, where we’ll show you how to use GKE private clusters for increased security and compliance.