GKE Dataplane V2


This page gives an overview of what GKE Dataplane V2 does and how it works.

Before you read this page, you should understand networking inside GKE clusters.

Overview of GKE Dataplane V2

GKE Dataplane V2 is a dataplane that is optimized for Kubernetes networking. GKE Dataplane V2 provides:

  • A consistent user experience for networking.
  • Real-time visibility of network activity.
  • Simpler architecture that makes it easier to manage and troubleshoot clusters.

GKE Dataplane V2 is enabled for all new Autopilot clusters in versions 1.22.7-gke.1500 and later, 1.23.4-gke.1500 and later, and all versions 1.24 and later.
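
For Standard clusters, you opt in to GKE Dataplane V2 when you create the cluster (see Limitations later on this page). The following is a minimal sketch, assuming the gcloud CLI is installed and authenticated; CLUSTER_NAME and COMPUTE_REGION are placeholders:

    gcloud container clusters create CLUSTER_NAME \
        --region COMPUTE_REGION \
        --enable-dataplane-v2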

How GKE Dataplane V2 works

GKE Dataplane V2 is implemented using eBPF. As packets arrive at a GKE node, eBPF programs installed in the kernel decide how to route and process the packets. Unlike packet processing with iptables, eBPF programs can use Kubernetes-specific metadata in the packet. This lets GKE Dataplane V2 process network packets in the kernel more efficiently and report annotated actions back to user space for logging.

The following diagram shows the path of a packet through a node using GKE Dataplane V2:

GKE deploys the GKE Dataplane V2 controller as a DaemonSet named anetd to each node in the cluster. anetd interprets Kubernetes objects and programs network topologies in eBPF. The anetd Pods run in the kube-system namespace.
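
For example, you can verify that the dataplane controller is running by checking the anetd DaemonSet and its per-node Pods (a quick check, assuming you have kubectl access to the cluster):

    kubectl get daemonset anetd --namespace kube-system
    kubectl get pods --namespace kube-system -o wide | grep anetd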

GKE Dataplane V2 and NetworkPolicy

GKE Dataplane V2 is implemented using Cilium. The legacy dataplane for GKE is implemented using Calico.

Both of these technologies manage Kubernetes NetworkPolicy. Cilium uses eBPF and the Calico Container Network Interface (CNI) uses iptables in the Linux kernel.

Advantages of GKE Dataplane V2

Scalability

GKE Dataplane V2 has different scalability characteristics than the legacy dataplane.

In GKE versions where GKE Dataplane V2 does not use kube-proxy and does not rely on iptables for Service routing, GKE removes iptables-related bottlenecks, such as those that scale with the number of Services.

GKE Dataplane V2 relies on eBPF maps that are limited to 64,000 endpoints across all Services.

Security

Kubernetes NetworkPolicy is always on in clusters with GKE Dataplane V2. You don't have to install and manage third-party software add-ons such as Calico to enforce network policy.
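
For example, a standard Kubernetes NetworkPolicy is enforced as soon as you apply it, with no add-on to install. The following manifest is an illustrative sketch (the namespace, labels, and port are assumptions, not values from this page); it allows ingress to Pods labeled app: db only from Pods labeled app: backend on TCP port 5432:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-backend-to-db   # hypothetical policy name
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: db                 # Pods this policy protects (assumed label)
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: backend    # only these Pods can connect (assumed label)
          ports:
            - protocol: TCP
              port: 5432

You can apply the manifest with kubectl apply -f; no Calico installation is required.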

Operations

When you create a cluster with GKE Dataplane V2, network policy logging is built in. Configure the logging CRD on your cluster to record when connections to and from your Pods are allowed or denied.
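
As a sketch, the logging configuration is expressed through a cluster-scoped NetworkLogging object named default. The exact schema can vary by GKE version, so treat the following as an illustration of enabling logs for both allowed and denied connections rather than a definitive manifest:

    kind: NetworkLogging
    apiVersion: networking.gke.io/v1alpha1
    metadata:
      name: default              # the logging configuration object is named "default"
    spec:
      cluster:
        allow:
          log: true              # log allowed connections
          delegate: false
        deny:
          log: true              # log denied connections
          delegate: false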

Consistency

GKE Dataplane V2 provides a consistent networking experience across the environments where it is available.

For more information, see Availability of GKE Dataplane V2.

GKE Dataplane V2 technical specifications

GKE Dataplane V2 supports clusters with the following specifications:

Specification                               | GKE    | GKE on VMware | Google Distributed Cloud Virtual for Bare Metal
Number of nodes per cluster                 | 5,000  | 500           | 500
Number of Pods per cluster                  | 50,000 | 15,000        | 27,500
Number of LoadBalancer Services per cluster | 750    | 500           | 1,000

GKE Dataplane V2 maintains a Service map to keep track of which Services refer to which Pods as their backends. The number of Pod backends for each Service, summed across all Services, must fit into the Service map, which can contain up to 64,000 entries. For example, 1,000 Services with 64 backend Pods each would fill the map completely. If this limit is exceeded, your cluster might not work as intended.

Node limit increase in version 1.23

Starting in Kubernetes version 1.23, the limit of 500 nodes per GKE Dataplane V2 cluster was raised to 5,000, with the following additional conditions imposed on clusters:

  • Private clusters or public clusters that use Private Service Connect. To check if your cluster uses Private Service Connect, see Public clusters with Private Service Connect.
  • Regional clusters only
  • Only clusters that were created with GKE version 1.23 or later have the raised 5,000-node limit. Clusters that were created with earlier GKE versions might require lifting a cluster size quota. Contact support for assistance.
  • Clusters that use Cilium CRDs (CiliumNetworkPolicy and CiliumClusterwideNetworkPolicy) cannot scale to 5,000 nodes.

LoadBalancer Services in GKE on VMware

The number of LoadBalancer Services supported in GKE on VMware depends on the load balancer mode being used. GKE on VMware supports 500 LoadBalancer Services when using bundled load balancing mode (Seesaw) and 250 when using integrated load balancing mode with F5. For more information, see Scalability.

Limitations

GKE Dataplane V2 has the following limitations:

  • GKE Dataplane V2 can only be enabled when creating a new cluster. Existing clusters cannot be upgraded to use GKE Dataplane V2.
  • In GKE versions earlier than 1.20.12-gke.500, if you enable GKE Dataplane V2 with NodeLocal DNSCache, you cannot configure Pods with dnsPolicy: ClusterFirstWithHostNet, or your Pods will experience DNS resolution errors.
  • Starting in GKE version 1.21.5-gke.1300, GKE Dataplane V2 does not support CiliumNetworkPolicy or CiliumClusterwideNetworkPolicy CRD APIs.
  • Manually created internal passthrough Network Load Balancers associated with a Service of type NodePort are not supported.
  • Because GKE Dataplane V2 uses eBPF to optimize kernel packet processing, Pod performance might be affected in workloads with high Pod churn. The primary focus of GKE Dataplane V2 is achieving optimal eBPF performance.
  • There is a known issue with multi-cluster Services with multiple (TCP/UDP) ports on GKE Dataplane V2. For more information, see MCS Services with multiple ports.
  • GKE Dataplane V2 uses Cilium instead of kube-proxy to implement Kubernetes Services. kube-proxy is maintained and developed by the Kubernetes community, so new features for Services are more likely to be implemented in kube-proxy before they are implemented in Cilium for GKE Dataplane V2. One example of a Services feature that was first implemented in kube-proxy is KEP-1669: Proxy Terminating Endpoints.
  • In clusters running version 1.25 or earlier that use NodePort Services with the default SNAT and PUPI ranges, you must add the PUPI range of the Pods to nonMasqueradeCIDRs in the ip-masq-agent ConfigMap to avoid connectivity issues (see the example ConfigMap after this list).
  • In certain cases, GKE Dataplane V2 agent Pods (anetd) can consume a significant amount of CPU resources, up to two or three vCPUs per instance. This occurs when there's a high volume of TCP connections being opened and closed rapidly on the node. To mitigate this problem, we recommend implementing keep-alives for HTTP calls and connection pooling for the relevant workloads.
  • GKE Dataplane V2 clusters running control plane version 1.27 or earlier don't support the Service .spec.internalTrafficPolicy field. The effective internal traffic policy for a Service is Cluster; backends on any node are considered candidates for Service resolution. For more information about the field, see Service Internal Traffic Policy.
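
For the NodePort SNAT limitation above, the following is a sketch of the kind of ip-masq-agent ConfigMap involved; the CIDR values are placeholders, and you must substitute the actual PUPI range used by your Pods:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    data:
      config: |
        nonMasqueradeCIDRs:
          - 10.0.0.0/8          # example default non-masquerade range (placeholder)
          - 198.51.100.0/24     # placeholder: the PUPI range of your Pods
        resyncInterval: 60s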

GKE Dataplane V2 and kube-proxy

GKE Dataplane V2 does not use kube-proxy except on Windows Server node pools on GKE versions 1.25 and earlier.

Network policy enforcement without GKE Dataplane V2

See Using network policy enforcement for instructions to enable network policy enforcement in clusters that don't use GKE Dataplane V2.

What's next