Containers & Kubernetes

65,000 nodes and counting: Google Kubernetes Engine is ready for trillion-parameter AI models

November 13, 2024
Drew Bradstock

Senior Director of Product, Cloud Runtimes

Maciek Różacki

Group Product Manager, Google Kubernetes Engine


As generative AI evolves, we're beginning to see its transformative potential across industries and in our daily lives. And as large language models (LLMs) increase in size — current models reach hundreds of billions of parameters, and the most advanced approach 2 trillion — the need for computational power will only intensify. In fact, training these large models on modern accelerators already requires clusters that exceed 10,000 nodes.

With support for 15,000-node clusters — the world’s largest — Google Kubernetes Engine (GKE) has the capacity to handle these demanding training workloads. Today, in anticipation of even larger models, we are introducing support for 65,000-node clusters.

With support for up to 65,000 nodes, we believe GKE offers more than 10X the scale of the other two largest public cloud providers.

Unmatched scale for training or inference

Scaling to 65,000 nodes provides much-needed capacity for the world’s most resource-hungry AI workloads. Combined with innovations in accelerator computing power, this will enable customers to reduce model training time or scale models to multiple trillions of parameters and beyond. Each node is equipped with multiple accelerators (e.g., a Cloud TPU v5e node has four chips), making it possible to manage over 250,000 accelerators in one cluster.
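The accelerator count above follows directly from the node and chip figures in this post; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope capacity math using the figures from this post:
# 65,000 nodes per cluster, 4 chips per Cloud TPU v5e node.
NODES_PER_CLUSTER = 65_000
CHIPS_PER_NODE = 4  # e.g., a Cloud TPU v5e node

def cluster_accelerators(nodes: int, chips_per_node: int) -> int:
    """Total accelerator chips addressable in a single cluster."""
    return nodes * chips_per_node

total = cluster_accelerators(NODES_PER_CLUSTER, CHIPS_PER_NODE)
print(total)  # 260000 -- i.e., "over 250,000 accelerators in one cluster"
```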

To develop cutting-edge AI models, customers need to be able to allocate computing resources across diverse workloads. This includes not only model training but also serving, inference, ad hoc research, and auxiliary tasks. Consolidating computing power into the smallest possible number of clusters gives customers the flexibility to adapt quickly to shifts in demand across inference serving, research, and training workloads.

With support for 65,000 nodes, GKE can now run five jobs in a single cluster, each matching the scale of Google Cloud's previous world-record LLM training job.

Customers on the cutting edge of AI welcome these developments. Anthropic, an AI safety and research company working to build reliable, interpretable, and steerable AI systems, is excited about GKE’s expanded scale.

“GKE’s new support for larger clusters provides the scale we need to accelerate our pace of AI innovation.” - James Bradbury, Head of Compute, Anthropic

Innovations under the hood

This achievement is made possible by a variety of enhancements. For one, we are transitioning GKE from etcd, the open-source distributed key-value store, to a new, more robust key-value store built on Spanner, Google’s distributed database that delivers virtually unlimited scale. Beyond supporting larger GKE clusters, this change will usher in new levels of reliability for GKE users, providing improved latency for cluster operations (e.g., cluster startup and upgrades) and a stateless cluster control plane. By implementing the etcd API on top of our Spanner-based storage, we help ensure backward compatibility and avoid having to make changes in core Kubernetes to adopt the new technology.

In addition, thanks to a major overhaul of the GKE infrastructure that manages the Kubernetes control plane, GKE now scales significantly faster, meeting the demands of your deployments with fewer delays. This enhanced cluster control plane delivers multiple benefits, including the ability to run high-volume operations with exceptional consistency. The control plane now automatically adjusts to these operations, while maintaining predictable operational latencies. This is particularly important for large and dynamic applications such as SaaS, disaster recovery and fallback, batch deployments, and testing environments, especially during periods of high churn.

We’re also constantly innovating on IaaS and GKE capabilities to make Google Cloud the best place to build your AI workloads. Recent innovations in this space include: 

  • Secondary boot disk, which provides faster workload startups through container image caching

  • Fully managed DCGM metrics for improved accelerator monitoring

  • Hyperdisk ML, a high-performance storage solution for scalable applications that is now generally available

  • Serverless GPUs, now available in Cloud Run

  • Custom compute classes, which offer greater control over compute resource allocation and scaling

  • Support for Trillium, our sixth-generation TPU, the most performant and most energy-efficient TPU to date 

  • Support for A3 Ultra VM powered by NVIDIA H200 Tensor Core GPUs with our new Titanium ML network adapter, which delivers non-blocking 3.2 Tbps of GPU-to-GPU traffic with RDMA over Converged Ethernet (RoCE). A3 Ultra VMs will be available in preview next month.
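To put the 3.2 Tbps aggregate figure above in perspective, here is a quick sanity check. Note that the per-VM GPU count is an assumption on our part (an eight-GPU topology is typical for this class of machine, but the post does not state it):

```python
# Hypothetical per-GPU bandwidth check for the aggregate figure above.
# The 3.2 Tbps number comes from the post; the 8-GPU-per-VM topology
# is an assumption for illustration.
AGGREGATE_GBPS = 3200   # 3.2 Tbps of non-blocking GPU-to-GPU RoCE traffic
GPUS_PER_VM = 8         # assumption: typical eight-GPU configuration

per_gpu_gbps = AGGREGATE_GBPS // GPUS_PER_VM
print(per_gpu_gbps)  # 400 -- Gb/s of GPU-to-GPU bandwidth per GPU
```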

A continued commitment to open source

Guided by Google's long-standing and robust open-source culture, we make substantial contributions to the open-source community, including when it comes to scaling Kubernetes. With support for 65,000-node clusters, we made sure that all necessary optimizations and improvements for such scale are part of the core open-source Kubernetes.

Our investments to make Kubernetes the best foundation for AI platforms go beyond scalability. Here is a sampling of our contributions to the Kubernetes project over the past two years:

  • Drove a major overhaul of the Job API

  • Incubated the K8S Batch Working Group to build a community around research, HPC and AI workloads, producing tools like Kueue.sh, which is becoming the de facto standard for job queueing on Kubernetes 

  • Created the JobSet operator that is being integrated into the Kubeflow ecosystem to help run heterogeneous jobs (e.g., driver-executor)

  • For multihost inference use cases, created the Leader Worker Set controller

  • Published JetStream, a highly optimized model server

  • Incubated the Kubernetes Serving Working Group, which is driving multiple efforts including model metrics standardization, Serving Catalog and Inference Gateway

At Google Cloud, we’re dedicated to providing the best platform for running containerized workloads, consistently pushing the boundaries of innovation. These new advancements allow us to support the next generation of AI technologies. For more, listen to the Kubernetes podcast, where Maciek Rozacki and Wojtek Tyczynski join host Kaslin Fields to talk about GKE’s support for 65,000 nodes. You can also watch a demo of 65,000 nodes running in a single GKE cluster here.
