Leveraging Class E IPv4 Address space to mitigate IPv4 exhaustion issues in GKE
Basant Amarkhed
Software Engineer, GKE Networking
Gopinath Balakrishnan
Customer Engineer, GCP Networking
As the number of applications and services hosted on Google Kubernetes Engine (GKE) continues to grow, so does the demand for private IPv4 addresses (RFC 1918). For many large organizations, the RFC 1918 address space is becoming increasingly scarce, leading to IP address exhaustion challenges that limit how far their applications can scale. IPv6 solves this exhaustion problem by providing a vastly larger address space, but not all enterprises or applications are ready for IPv6 yet. Enter the Class E IPv4 address space (240.0.0.0/4), which can relieve this pressure so you can continue to grow your business.
As mentioned in Google VPC network valid IPv4 ranges, Class E addresses (240.0.0.0/4) are reserved for future use, as noted in RFC 5735 and RFC 1112 — however, that doesn’t mean you can’t use them today in certain circumstances. In this blog post, we'll delve into common misconceptions about Class E addresses, discuss the benefits and considerations of using them, and provide guidance on planning and using GKE clusters with Class E. Additionally, we'll share a real-world example of how one customer, Snap, successfully leveraged Class E to overcome their IP address exhaustion challenges.
Understanding Class E addresses
The following are some common objections or misconceptions about using Class E addresses:
- Class E addresses do not work with other Google services. This is not true. Google Cloud VPC includes Class E addresses as part of its valid IPv4 address ranges, and many Google-managed services can be accessed over private connectivity from resources that use Class E addresses.
- Using Class E addresses limits communication with services outside Google Cloud (the internet, or Interconnect to on-premises or other clouds). This is misleading. Because Class E addresses are non-routable and are not advertised over the internet or outside of Google Cloud, you can use NAT or IP masquerading to translate Class E addresses to public or private IPv4 addresses and reach destinations outside of Google Cloud (see the Cloud NAT sketch after this list). In addition:
  - With the notable exception of Microsoft Windows, many operating systems now support Class E addresses.
  - Many on-prem vendors (Cisco, Juniper, Arista) support routing Class E addresses for private data center use.
- Class E addresses have performance/scale limitations. This is not true. Class E addresses perform no differently from other address ranges used in Google Cloud, and even with NAT or IP masquerading, the masquerade agents scale to support a large number of connections without impacting performance.
So while Class E addresses are reserved for future use, are not routable over the internet, and should not be advertised publicly, you can use them privately within Google Cloud VPCs, for both Compute Engine instances and Kubernetes Pods and Services in GKE.
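To make the NAT option above concrete, here is a minimal sketch of fronting a subnet that carries a Class E secondary range with Cloud NAT, so egress traffic is translated to a public IPv4 address. The network, subnet, range, router, and region names (my-vpc, my-subnet, pods-class-e, nat-router, us-central1) are placeholders rather than values from this post; depending on your masquerade configuration, you may only need to NAT the subnet's primary range.

```
# Sketch: give resources that use Class E ranges a path to the internet by
# translating their traffic with Cloud NAT. All names below are placeholders.

# Cloud Router that the NAT gateway attaches to
gcloud compute routers create nat-router \
    --network=my-vpc \
    --region=us-central1

# NAT gateway covering the subnet's primary range and the Class E
# secondary range used for Pods
gcloud compute routers nats create nat-gateway \
    --router=nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-custom-subnet-ip-ranges=my-subnet,my-subnet:pods-class-e
```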
Benefits
With these caveats, Class E addresses provide a number of advantages:
- Vast address space - Class E addresses offer a significantly larger pool of IP addresses compared to traditional RFC 1918 private addresses (RFC 1918: ~17.9 million addresses vs Class E: ~268.4 million addresses). This abundance is beneficial for organizations facing IP address exhaustion, allowing them to scale their applications and services without the constraints of limited address space.
- Scalability and growth - The expansive nature of Class E addressing makes it easy to scale applications and services within Google Cloud and GKE. You can deploy and expand your infrastructure without IP address limitations, fostering growth and innovation even during periods of peak usage.
- Efficient resource utilization - With Class E addresses, you can optimize your IP address allocation strategies, minimizing the risk of address conflicts and helping to ensure that IP resources are used effectively. This contributes to both cost savings and streamlined operations.
- Future-proofing - While not all operating systems support Class E, its adoption is expected to grow as the need for more IP addresses increases. By embracing Class E early on, you can future-proof your infrastructure so it can scale to support business growth for many years to come.
Things to be aware of
While Class E IP addresses offer significant benefits, there are some important considerations to keep in mind:
- Operating system compatibility: Not all operating systems currently support Class E addressing. Before implementing Class E, ensure that your chosen operating system and tools are compatible (a quick Linux check is sketched after this list).
- Networking equipment and software: Verify that your networking equipment, such as routers and firewalls (including third-party virtual appliances running on Compute Engine), can handle Class E addresses. Additionally, ensure that software and applications that interact with IP addresses are updated to support Class E.
- Transition and migration: If you're currently using RFC 1918 private addresses, transitioning to Class E requires careful planning and execution to avoid disruptions.
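As a starting point for the compatibility checks above, the following is a minimal sketch of verifying that a Linux host accepts Class E addressing. The interface name (eth0) and the addresses are placeholders, and the reachability test assumes a second host configured at 240.10.0.3 on the same segment.

```
# Sketch: verify that a Linux host will accept and route a Class E address.
# The interface name (eth0) and the addresses are placeholders.

# Assign a Class E address to an interface; modern Linux kernels accept this.
sudo ip addr add 240.10.0.2/24 dev eth0

# Confirm the address and the connected route were installed.
ip addr show dev eth0
ip route show | grep 240.10.0.0

# From a second host configured the same way (e.g. 240.10.0.3), check reachability.
ping -c 3 240.10.0.3
```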
How Snap adopted Class E
The increasing use of microservices and containerization platforms like GKE, especially by large-scale customers like Snap, presents challenges in network IP management. With hundreds of thousands of pods deployed, Snap’s limited pool of RFC1918 private IPv4 addresses was quickly exhausted, hindering cluster scalability and requiring significant manual effort to free up addresses.
Snap initially considered migrating to IPv6, but concerns about application readiness and interoperability led them to adopt dual-stack GKE nodes and GKE pods (IPv6 + Class E IPv4). This solution mitigated IP exhaustion and provided Snap with multiple years of IP address scale needed to support future growth and reduce operational overhead. In addition, this approach also aligned with Snap’s long-term strategy for migrating to IPv6.
“Introducing IPv4 Class-E address space has doubled our IP capacity, solving scaling challenges and eliminated address exhaustion and subnet management issues. Now, our large and complex use cases, like gateways and machine learning workloads, are running smoothly on the Class-E address space across multiple regions.” - Mahmoud Ragab, Software Engineering Manager, Snap
Configuring clusters with Class E
New clusters
Requirement
The cluster must be a VPC-native cluster.
Steps
- Create a subnetwork with secondary ranges for pods (and optionally for services). The secondary ranges can use CIDRs from the Class E range (240.0.0.0/4).
- Use the secondary ranges created above for the pod and services CIDR ranges when creating the cluster. This is the user-managed secondary range assignment method as shown here.
- Optionally, configure IP masquerading so that Pod IP addresses are translated (SNAT) to the underlying node's IP address (see the sketch after these steps).
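The steps above can be sketched with gcloud and kubectl roughly as follows. All names and CIDRs (my-vpc, my-subnet, class-e-cluster, and the ranges) are placeholders, and the ip-masq-agent ConfigMap is only needed if you want the optional SNAT behavior described in the last step and your cluster runs the ip-masq-agent.

```
# Sketch: new VPC-native cluster with Class E secondary ranges.
# All names and CIDRs below are placeholders.

# 1. Subnet with an RFC 1918 primary range for nodes and Class E secondary
#    ranges for Pods and Services.
gcloud compute networks subnets create my-subnet \
    --network=my-vpc \
    --region=us-central1 \
    --range=10.128.0.0/20 \
    --secondary-range=pods-class-e=240.10.0.0/16,services-class-e=240.20.0.0/20

# 2. VPC-native cluster that uses those user-managed secondary ranges.
gcloud container clusters create class-e-cluster \
    --region=us-central1 \
    --network=my-vpc \
    --subnetwork=my-subnet \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods-class-e \
    --services-secondary-range-name=services-class-e

# 3. Optional: configure the ip-masq-agent so traffic to destinations outside
#    the listed CIDRs is SNAT'ed to the node's IP address.
cat <<EOF > config
nonMasqueradeCIDRs:
  - 10.128.0.0/20      # node range
  - 240.10.0.0/16      # Pod range
  - 240.20.0.0/20      # Service range
masqLinkLocal: false
resyncInterval: 60s
EOF

kubectl create configmap ip-masq-agent \
    --namespace=kube-system \
    --from-file=config
```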
Migrating clusters
Requirement
The clusters should be VPC-native clusters.
Steps
- The cluster’s default Pod IPv4 range cannot be updated. You can, however, add Pod ranges for new node pools, and those ranges can use Class E CIDRs.
- You can optionally migrate workloads from the older node pools to the newer node pools that use Class E ranges for Pods (see the sketch after these steps).
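A minimal sketch of those two steps, assuming the cluster and subnet from the earlier example; the range, node pool, and label values (pods-class-e-2, class-e-pool, old-pool) are placeholders.

```
# Sketch: add a node pool whose Pods draw from a Class E range, then move
# workloads onto it. Names and CIDRs are placeholders.

# 1. Add a Class E secondary range to the cluster's subnet.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --add-secondary-ranges=pods-class-e-2=240.30.0.0/16

# 2. Create a node pool that uses the new range for its Pods.
gcloud container node-pools create class-e-pool \
    --cluster=class-e-cluster \
    --region=us-central1 \
    --pod-ipv4-range=pods-class-e-2

# 3. Optionally shift workloads: cordon the old pool, then drain it so Pods
#    reschedule onto the new nodes.
kubectl cordon -l cloud.google.com/gke-nodepool=old-pool
kubectl drain -l cloud.google.com/gke-nodepool=old-pool \
    --ignore-daemonsets --delete-emptydir-data
```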
Transitioning from Class E IPv4 to IPv6
Transitioning now to dual-stack clusters that use Class E IPv4 addresses alongside IPv6 is a smart strategic move for organizations facing IP exhaustion. It provides immediate relief by expanding the available pool of IP addresses, enabling scalability and growth within Google Cloud and GKE. Moreover, adopting dual-stack clusters (sketched below) is a crucial first step towards a smoother transition to IPv6-only.
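For reference, a dual-stack cluster of the kind described here can be created along these lines. The names and ranges are placeholders, and the subnet is assumed to already be configured as dual-stack with an internal IPv6 range.

```
# Sketch: dual-stack (IPv4 + IPv6) VPC-native cluster whose IPv4 side uses
# Class E secondary ranges. Names and ranges are placeholders; the subnet is
# assumed to be dual-stack with internal IPv6 access enabled.
gcloud container clusters create dual-stack-cluster \
    --region=us-central1 \
    --network=my-vpc \
    --subnetwork=my-dual-stack-subnet \
    --enable-ip-alias \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=internal \
    --cluster-secondary-range-name=pods-class-e \
    --services-secondary-range-name=services-class-e
```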
Learn more about networking in GKE
To learn more about best practices in GKE Networking, start here.