Containers & Kubernetes

New GKE Dataplane V2 increases security and visibility for containers


One of Kubernetes’ true superpowers is its developer-first networking model. It provides easy-to-use features such as L3/L4 services and L7 ingress to bring traffic into your cluster as well as network policies for isolating multi-tenant workloads. As more and more enterprises adopt Kubernetes, the gamut of use cases is widening with new requirements around multi-cloud, security, visibility and scalability. In addition, new technologies such as service mesh and serverless demand more customization from the underlying Kubernetes layer. These new requirements all have something in common: they need a more programmable dataplane that can perform Kubernetes-aware packet manipulations without sacrificing performance.

Enter Extended Berkeley Packet Filter (eBPF), a new Linux networking paradigm that exposes programmable hooks to the network stack inside the Linux kernel. The ability to enrich the kernel with user-space information—without jumping back and forth between user and kernel spaces—enables context-aware operations on network packets at high speeds.

Today, we’re introducing GKE Dataplane V2, an opinionated dataplane that harnesses the power of eBPF and Cilium, an open source project that makes the Linux kernel Kubernetes-aware using eBPF. We’re also using Dataplane V2, now in beta, to bring Kubernetes Network Policy logging to Google Kubernetes Engine (GKE).

What are eBPF and Cilium?

eBPF is a revolutionary technology that can run sandboxed programs in the Linux kernel without recompiling the kernel or loading kernel modules. Over the last few years, eBPF has become the standard way to address problems that previously relied on kernel changes or kernel modules. In addition, eBPF has resulted in the development of a completely new generation of tooling in areas such as networking, security, and application profiling. These tools no longer rely on existing kernel functionality but instead actively reprogram runtime behavior, all without compromising execution efficiency or safety.
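To make this concrete, here is a minimal illustration of attaching a sandboxed program to a kernel hook without writing or loading a kernel module, using the bpftrace front end (this assumes a Linux host with bpftrace installed and root privileges; the probe point is just an example, not anything GKE-specific):

```shell
# Attach a one-line eBPF program to the kernel's tcp_connect function and
# print which process initiated each outbound TCP connection. The program is
# verified and JIT-compiled by the kernel; no kernel module is loaded.
sudo bpftrace -e 'kprobe:tcp_connect { printf("tcp_connect by %s (pid %d)\n", comm, pid); }'
```

Press Ctrl-C to detach; the kernel unloads the program automatically, which is exactly the safety property that makes eBPF suitable for reprogramming runtime behavior in production.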

Cilium is an open source project that has been designed on top of eBPF to address the new scalability, security, and visibility requirements of container workloads. Cilium goes beyond a traditional Container Networking Interface (CNI) to provide service resolution, policy enforcement, and much more, as seen in the picture below.

[Figure: Cilium capabilities compared to a traditional Container Networking Interface (CNI)]

The Cilium community has put in a tremendous amount of effort to bootstrap the Cilium project, which is the most mature eBPF implementation for Kubernetes out there. We at Google actively contribute to the Cilium project, so that the entire Kubernetes community can leverage the advances we are making with eBPF.

Using eBPF to build Kubernetes Network Policy Logging

Let’s look at a concrete application of how eBPF is helping us solve a real customer pain point. Security-conscious customers use Kubernetes network policies to declare how pods can communicate with one another. However, there has been no scalable way to troubleshoot and audit the behavior of these policies, which makes them a non-starter for many enterprise customers. With the introduction of eBPF to GKE, we can now support real-time policy enforcement and correlate policy actions (allow/deny) with pod, namespace, and policy names at line rate, with minimal impact on the node’s CPU and memory resources.
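For reference, a network policy is an ordinary declarative Kubernetes object. The sketch below (the `demo` namespace and the `app: frontend`/`app: backend` labels are hypothetical) allows only frontend pods to reach backend pods on port 8080; with policy logging enabled, both the allowed connections it permits and the connections it denies can be surfaced:

```shell
# Hypothetical example: restrict ingress to backend pods.
# Connections from frontend pods to port 8080 are allowed (and logged as
# such); any other traffic to these pods is denied (and logged as denied).
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
EOF
```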

[Figure: eBPF programs installed in the Linux kernel enforcing network policy and reporting action logs]

The image above shows how highly specialized eBPF programs are installed into the Linux kernel to enforce network policy and report action logs. As packets come into the VM, the eBPF programs installed in the kernel decide how to route each packet. Unlike IPTables, eBPF programs have access to Kubernetes-specific metadata, including network policy information. This way, they can not only allow or deny the packet, but also report annotated actions back to user space. These events make it possible for us to generate network policy logs that are meaningful to a Kubernetes user. For instance, the log snippet shown below pinpoints which source pod was trying to connect to which destination pod and which network policy allowed that connection.

[Figure: example network policy log entries]

Under the hood, Network Policy logging leverages GKE Dataplane V2. Not only does GKE Dataplane V2 expose the information needed for policy logging, it also completely abstracts away the details of configuring network policy enforcement from the user. That is, when you use Dataplane V2, you no longer have to worry about explicitly enabling network policy enforcement or picking the right CNI to use network policy on your GKE clusters. Talk about making Kubernetes easy to use!
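On a Dataplane V2 cluster, turning the logs on is itself just a Kubernetes object. The sketch below uses the `NetworkLogging` resource from the GKE documentation (the `v1alpha1` API version and field names reflect the beta and may evolve); it enables logging for both allowed and denied connections cluster-wide:

```shell
# Enable network policy logging on a Dataplane V2 cluster. A singleton
# NetworkLogging object named "default" controls the logging behavior;
# "delegate: false" logs at the cluster level rather than per-policy.
kubectl apply -f - <<EOF
kind: NetworkLogging
apiVersion: networking.gke.io/v1alpha1
metadata:
  name: default
spec:
  cluster:
    allow:
      log: true
      delegate: false
    deny:
      log: true
      delegate: false
EOF
```

Note that logging every allowed connection can be verbose on busy clusters; many deployments log denies cluster-wide and delegate allow-logging to individual policies.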

Besides network policy, Kubernetes load balancing can also use eBPF to implement Direct Server Return (DSR) mode. DSR eliminates the extra NAT hop that obscures the client’s IP address when using Kubernetes LoadBalancer services. eBPF’s ability to encode metadata into a network packet on the fly lets us pass the destination node the information it needs to respond directly to the original client. With DSR, we can reduce the bandwidth requirements of each node as well as avoid port exhaustion.

eBPF’s ability to augment network packets with custom metadata enables a long list of possible use cases. We are as excited about the future of Kubernetes and eBPF as you are, so stay tuned for more innovations.

How you can benefit from this

Enterprises are always looking to improve their security posture with better visibility into their infrastructure. They want to be able to quickly identify abnormal traffic patterns such as pods that are unexpectedly talking to the internet and denial-of-service attacks. With Kubernetes Network Policy logging, you can now see all allowed and denied network connections directly in the Cloud Logging console to troubleshoot policies and spot irregular network activity.
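If you prefer the command line to the console, the same entries can be pulled with `gcloud logging read`. The filter below is a sketch: the exact log name for policy-action entries is an assumption based on the beta, so confirm it in the Logs Explorer for your project before relying on it:

```shell
# Read recent network policy action logs emitted by cluster nodes.
# The log_id used here is an assumption; verify the actual log name for
# policy actions in Cloud Logging's Logs Explorer.
gcloud logging read \
  'resource.type="k8s_node" AND log_id("networking.gke.io/policy_action")' \
  --limit=10 \
  --format=json
```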


To try out Kubernetes Network Policy logging for yourself, create a new GKE cluster with Dataplane V2 using the following command:

  gcloud beta container clusters create <cluster name> \
      --enable-dataplane-v2 \
      --release-channel rapid \
      --enable-ip-alias \
      --zone us-east1-d


Google would like to thank Thomas Graf, co-founder of the Cilium project, for his contributions to this blog post.