Containers & Kubernetes

Announcing the general availability of Network Function Optimizer for GKE Enterprise

February 13, 2024
Mahesh Narayanan

Senior Product Manager

Susan Wu

Outbound Product Manager


Traditionally, customers have relied on VMs to run their network-intensive applications. But organizations that want to migrate these VM-based applications to Kubernetes and achieve the same level of performance need to add advanced container networking capabilities to their Pods, such as support for multiple interfaces per Pod and high-performance data-plane acceleration technology.

Today, we’re excited to announce that Network Function Optimizer is generally available for GKE Enterprise, the premium edition of Google Kubernetes Engine. As part of GKE Enterprise, Network Function Optimizer delivers the enterprise scale and high data-plane performance for containerized applications that our customers have been looking for, including the functionality we’ve developed as part of our Multi-network Kubernetes Enhancement Proposal and our “Multi-network, New Level of Multi-tenancy” presentation to the Kubernetes community. On top of that, we’ve packaged all these advanced container networking features into one nifty solution, making it easy for you to migrate your network-intensive workloads to GKE Enterprise!

First announced at Mobile World Congress 2023, Network Function Optimizer in GKE Enterprise lets you apply these container networking enhancements directly from the GKE console to support additional use cases in AI/ML, telco, and containerized security. Some key use cases and benefits include:

  • Extending multi-network capabilities to Pods that run on the nodes. With multi-network support for Pods, you can enable multiple interfaces on nodes and Pods in a GKE cluster, allowing for data-plane and control-plane separation. This is a strict requirement for deploying containerized network functions (CNFs) such as containerized firewalls, IDS/IPS, and proxies (see the sketch after this list).

  • Delivering a high-performance data plane natively in software that is comparable to hardware-assisted data planes, simplifying workload scheduling on any Pod and removing underlying hardware/NIC dependencies. With this flexibility, you can now move your applications to containers and benefit from the advantages containers bring, such as autoscaling, bin-packing, and portability.
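To make the multi-network capability concrete, here is a minimal sketch using the Kubernetes Python client. It assumes a GKE cluster with multi-networking enabled and a secondary Network object named blue-network already created (both names are placeholders; a sketch of the Network object itself appears in the security section below). The Pod requests a second interface through GKE’s multi-network Pod annotations.

```python
# Minimal sketch (assumptions: multi-networking is enabled on the cluster and a
# Network object named "blue-network" exists). The Pod gets a second interface,
# eth1, attached to that network via GKE multi-network Pod annotations.
import json

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

interfaces = [
    {"interfaceName": "eth0", "network": "default"},      # primary Pod network
    {"interfaceName": "eth1", "network": "blue-network"},  # assumed secondary network
]

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="multi-nic-sample",
        annotations={
            "networking.gke.io/default-interface": "eth0",
            "networking.gke.io/interfaces": json.dumps(interfaces),
        },
    ),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="busybox",
                command=["sleep", "3600"],
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Inside the Pod, eth0 stays on the default Pod network while eth1 attaches to the secondary network, which is what enables the data-plane and control-plane separation described above.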

Workload-optimized infrastructure for AI 

With the rapid adoption of AI/ML, enterprises are building AI platforms on Kubernetes to serve models and perform inference for development teams and users across their organization. 

Two open-source projects in particular are enabling AI/ML at scale in the Kubernetes community. Kubeflow, which originated at Google and is now a CNCF incubating project, provides a multi-user environment and interactive notebook management. Ray orchestrates distributed computing workloads across the entire ML lifecycle, including training and serving. Although Ray is not a Kubernetes-native project, the community’s KubeRay toolkit simplifies Ray deployment on Kubernetes; we provide a Terraform template for running Ray on GKE.

Network Function Optimizer running on GKE complements these solutions, providing the low-latency, high-performance network interfaces needed to run AI/ML workloads such as Kubeflow and Ray.
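As a rough, hypothetical illustration of how this fits together, the same multi-network Pod annotations can be added to the worker Pod template of a KubeRay RayCluster so that Ray workers get a dedicated higher-bandwidth interface for data movement. The network name ml-data-network, images, and sizing below are placeholders rather than a prescribed configuration.

```python
# Hypothetical sketch: a KubeRay RayCluster whose worker Pods request a second
# interface on an assumed "ml-data-network" Network object. Verify the KubeRay
# API version (ray.io/v1 in recent releases) and field names for your setup.
import json

from kubernetes import client, config

config.load_kube_config()

multi_nic_annotations = {
    "networking.gke.io/default-interface": "eth0",
    "networking.gke.io/interfaces": json.dumps([
        {"interfaceName": "eth0", "network": "default"},
        {"interfaceName": "eth1", "network": "ml-data-network"},  # assumed network
    ]),
}

ray_cluster = {
    "apiVersion": "ray.io/v1",
    "kind": "RayCluster",
    "metadata": {"name": "raycluster-demo"},
    "spec": {
        "rayVersion": "2.9.0",
        "headGroupSpec": {
            "rayStartParams": {},
            "template": {"spec": {"containers": [
                {"name": "ray-head", "image": "rayproject/ray:2.9.0"}]}},
        },
        "workerGroupSpecs": [{
            "groupName": "workers",
            "replicas": 2,
            "minReplicas": 1,
            "maxReplicas": 4,
            "rayStartParams": {},
            "template": {
                "metadata": {"annotations": multi_nic_annotations},
                "spec": {"containers": [
                    {"name": "ray-worker", "image": "rayproject/ray:2.9.0"}]},
            },
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="ray.io", version="v1", namespace="default",
    plural="rayclusters", body=ray_cluster,
)
```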

“Infrastructure services that make it easier to scale out are critical for our customers to fully take advantage of the new next generation AmpereOne™ powered C3A compute instances. C3A instances are positioned to be the most dynamically scalable compute instances on the market today, especially paired with services like GKE and Google’s new Network Function Optimizer services.” - Jeff Wittich, Chief Product Officer, Ampere

Market-leading threat protection with a robust security ecosystem 

Google Cloud customers want threat protection that’s built-in, but they also want the flexibility to bring in their preferred security vendors from on-premises to their cloud environments. 

Deploying one or more GKE clusters in a Shared VPC can be relatively straightforward, but what happens if you need your Kubernetes clusters to communicate with multiple VPCs? Network Function Optimizer can help: its multi-network Pod capabilities enable multiple VPC networks to communicate with the GKE cluster.
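For illustration, this is roughly what wiring a second VPC into the cluster looks like with the GKE multi-network custom resources, again expressed with the Kubernetes Python client. The VPC, subnet, and secondary range names are placeholders for resources you would have created beforehand, and field names should be checked against the current GKE multi-network documentation.

```python
# Sketch only: a GKENetworkParamSet pointing at an assumed additional VPC and
# subnet, plus a Network object ("blue-network", as referenced earlier) that
# Pods can attach to. All names are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

param_set = {
    "apiVersion": "networking.gke.io/v1",
    "kind": "GKENetworkParamSet",
    "metadata": {"name": "blue-params"},
    "spec": {
        "vpc": "blue-vpc",           # assumed second VPC
        "vpcSubnet": "blue-subnet",  # assumed subnet in that VPC
        "podIPv4Ranges": {"rangeNames": ["blue-pod-range"]},  # assumed secondary range
    },
}

network = {
    "apiVersion": "networking.gke.io/v1",
    "kind": "Network",
    "metadata": {"name": "blue-network"},
    "spec": {
        "type": "L3",
        "parametersRef": {
            "group": "networking.gke.io",
            "kind": "GKENetworkParamSet",
            "name": "blue-params",
        },
    },
}

api.create_cluster_custom_object(
    group="networking.gke.io", version="v1",
    plural="gkenetworkparamsets", body=param_set,
)
api.create_cluster_custom_object(
    group="networking.gke.io", version="v1",
    plural="networks", body=network,
)
```

Pods that reference blue-network in their interface annotations, as in the earlier sketch, can then reach workloads in that additional VPC directly.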

Another foundational feature is its high-performance data plane. With Network Function Optimizer, containerized security services such as firewalls, intrusion detection, and VPNs, combined with native Linux data-plane acceleration, deliver performance comparable to hardware-based acceleration technology, while keeping the abstraction, flexibility, and portability inherent to software. With this, a security service can be scheduled on any Pod, removing the dependency between the containerized security service and the underlying NIC hardware. 

This way, you can continue to use Google Cloud’s built-in network security solutions with the flexibility to bring in your choice of third-party security solutions. With Network Function Optimizer, you can create security services with multiple layers of inspection provided by your preferred firewall and deep-packet-inspection vendors, and balance the network-intensive security-inspection load across many Pods. 

“In the cloud and data-centric world, threats are coming from internal and external sources, with a rapidly disappearing perimeter. Broadcom is partnering with Google to accelerate the delivery of advanced network functions that operate transparently in the data path on traffic as a bump in the wire using GKE and Network Function Optimizer. We are excited to work with Google Cloud in moving the needle forward for the whole industry and be a first adopter of these advanced functions. This enables us to improve our resiliency and increase our velocity, delivering features for our customers.” - Gary Tomic, Principal Architect, Fellow, Broadcom  

Getting started

To learn how to deploy and validate Network Function Optimizer capabilities in your own GKE deployment, watch this introduction video and these demo videos, and visit this codelab, where you’ll learn how to enable multiple interfaces on nodes and Pods in your GKE clusters.
