The evolution of Kubernetes networking with the GKE Gateway controller
Mark Church
Product Manager, Google Cloud
Bowei Du
Senior Staff Software Engineer, Cloud Networking
Last week the Kubernetes community announced the Gateway API as an evolution of the Kubernetes networking APIs. Led by Google together with a wide range of contributors, the Gateway API unifies networking under a core set of standard resources. Similar to how Ingress created an ecosystem of implementations, the Gateway API delivers unification, but with a broader scope, informed by lessons from both Ingress and service mesh.
Today we’re excited to announce the Preview release of the GKE Gateway controller, Google Cloud’s implementation of the Gateway API. Over a year in the making, the GKE Gateway controller manages internal and external HTTP(S) load balancing for a GKE cluster or a fleet of GKE clusters. The Gateway API enables multi-tenant sharing of load balancer infrastructure with centralized admin policy and control. And thanks to the API’s expressiveness, it supports advanced functionality such as traffic shifting, traffic mirroring, header manipulation, and more. Take the tour below to learn more!
A tour of the GKE Gateway controller
The first new resource you’ll encounter with the GKE Gateway controller is the GatewayClass. A GatewayClass is a template that describes the capabilities of the Gateways created from it. Every GKE 1.20+ cluster comes with two pre-installed GatewayClasses. Go spin up a GKE 1.20+ cluster and check for them right now!
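For example, once the cluster is up you can list them with kubectl (the output below is illustrative; the exact columns and controller name may differ in your release):

```
$ kubectl get gatewayclasses
NAME          CONTROLLER                  AGE
gke-l7-gxlb   networking.gke.io/gateway   3d
gke-l7-rilb   networking.gke.io/gateway   3d
```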
These GatewayClasses correspond to the regional Internal (gke-l7-rilb) and global External (gke-l7-gxlb) HTTP(S) Load Balancers, which are orchestrated by the Gateway controller to provide container-native load balancing.
Role-oriented design
The Gateway API is designed to be multi-tenant. It introduces two primary resources that create a separation of concerns between the platform owner and the service owner:
Gateways represent a load balancer and define how it listens for the traffic it routes. You can have multiple Gateways, one per team, or a single Gateway that is shared among different teams.
Routes are the protocol-specific routing configurations that attach to those Gateways. GKE supports HTTPRoutes today, with TCPRoutes and UDPRoutes on the roadmap. One or more Routes can bind to a Gateway, and together they define the routing configuration of your application.
The following example (which you can deploy in this tutorial) shows how the cluster operator deploys a Gateway resource to be shared by different teams, even across different Namespaces. The owner of the Gateway can define domain ownership, TLS termination, and other policies centrally, without involving service owners. Service owners, in turn, can define routing rules and traffic management that are specific to their app, without having to coordinate with other teams or with the platform administrators. The relationship between Gateway and Route resources creates a formal separation of responsibilities that can be managed with standard Kubernetes RBAC.
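As a rough sketch of the platform-owner side of that split, a shared Gateway might look like the manifest below. The field names follow the upstream Gateway API spec (the apiVersion used by the GKE Preview may differ), and the resource name, Namespace, and listener are hypothetical:

```yaml
# Owned by the platform team: a shared internal Gateway that admits
# Routes from any Namespace in the cluster.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway            # hypothetical name
  namespace: infra                # hypothetical admin-owned Namespace
spec:
  gatewayClassName: gke-l7-rilb   # regional internal HTTP(S) load balancing
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                 # service owners may attach Routes from their own Namespaces
```

Service owners then attach their HTTPRoutes to this Gateway from their own Namespaces, as sketched in the next section.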
Advanced routing and traffic management
The GKE Gateway controller introduces new routing and traffic management capabilities. The following HTTPRoute was deployed by the Store team in their Namespace. It matches traffic for foo.example.com/store and applies these traffic rules:
90% of the client traffic goes to store-v1
10% of the client traffic goes to store-v2 to canary the next version of the store
All of the client traffic is also copied and mirrored to store-v3 to scale-test the version after that (see the sketch below)
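A minimal sketch of such an HTTPRoute is shown below, using weighted backendRefs for the traffic split and the RequestMirror filter for mirroring, both from the upstream Gateway API spec. The Service names and port are hypothetical, the route attaches to the shared Gateway sketched earlier, and the apiVersion used by the GKE Preview may differ:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store
  namespace: store                # the Store team's own Namespace
spec:
  parentRefs:
  - name: shared-gateway          # binds to the admin-owned Gateway
    namespace: infra
  hostnames:
  - foo.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store
    filters:
    - type: RequestMirror         # copy every request to store-v3 for scale testing
      requestMirror:
        backendRef:
          name: store-v3
          port: 8080
    backendRefs:
    - name: store-v1              # 90% of traffic stays on the current version
      port: 8080
      weight: 90
    - name: store-v2              # 10% canaries the next version
      port: 8080
      weight: 10
```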
Native multi-cluster support
The GKE Gateway controller is built with native multi-cluster support for both internal and external load balancing via its multi-cluster GatewayClasses. Multi-cluster Gateways load-balance client traffic across a fleet of GKE clusters. This targets various use cases including:
- Multi-cluster or multi-region redundancy and failover
- Low-latency serving through GKE clusters that are in close proximity to clients
- Blue-green traffic shifting between clusters (try this tutorial)
- Expansion to multiple clusters because of organizational or security constraints
Once ingress is enabled in your Hub, the multi-cluster GatewayClasses (suffixed with `-mc`) appear in your GKE cluster, ready to use across your fleet.
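A multi-cluster Gateway is declared just like a single-cluster one, only referencing an `-mc` class. In the sketch below, the field names follow the upstream spec, the class name is inferred from the `-mc` suffix mentioned above, and the other names are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-mc-gateway          # hypothetical name
  namespace: infra
spec:
  gatewayClassName: gke-l7-gxlb-mc   # multi-cluster external class; name inferred from the `-mc` suffix
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
```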
The GKE Gateway controller is tightly integrated with multi-cluster Services so that GKE can provide both north-south and east-west multi-cluster load balancing. Multi-cluster Gateways leverage the service discovery of multi-cluster Services so that they have a full view of Pod backends. External multi-cluster Gateways provide internet load balancing and internal multi-cluster Gateways provide internal, private load balancing. Whether your traffic flows are east-west, north-south, public, or private, GKE provides all the multi-cluster networking capabilities that you need right out of the box.
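For illustration, multi-cluster Services make a Service visible to the fleet by exporting it. One common pattern (assuming the GKE multi-cluster Services API in the `net.gke.io` group) is a ServiceExport like the sketch below, applied in each cluster that runs the Service:

```yaml
# Export the store-v1 Service to the fleet so that multi-cluster Gateways
# can see its Pod backends in every member cluster.
# Assumes the GKE multi-cluster Services API; names reuse the example above.
apiVersion: net.gke.io/v1
kind: ServiceExport
metadata:
  name: store-v1        # must match the name of the Service being exported
  namespace: store
```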
The GKE Gateway controller is Google Cloud’s first implementation of the Gateway API. Thanks to the API’s loosely coupled resource model, TCPRoutes, UDPRoutes, and TLSRoutes will soon be added to the specification, further expanding its capabilities. This is the beginning of a new chapter in Kubernetes Service networking, and there is a long road ahead!
Learn more
There are many resources available to learn about the Gateway API and how to use the GKE Gateway controller. Check out one of these Learn K8s tutorials on Gateway API concepts: