
A decade of Kubernetes leadership: why Google Cloud should be your choice for Kubernetes

November 2, 2023
Drew Bradstock, Senior Director of Product, Cloud Runtimes
Gari Singh, Product Manager

Kubernetes has become a critical part of the modern software development landscape. Originally developed by Google, it is now the second-largest open source project in history, with more than 83,000 unique contributors over the past decade, and it is the de facto standard for running containerized applications in production.

Kubernetes has also helped democratize the cloud, making it possible for businesses of all sizes to reap the benefits of containerization. A powerful and flexible platform, it runs everything from small services to some of the world's largest and most complex applications. More recently, with the explosion of generative AI and large language models (LLMs), companies are turning to Kubernetes to run and scale complex, compute-intensive machine learning platforms.

The success of Kubernetes is a testament to the power of open-source software. Kubernetes is a radically open, community-first project. Tens of thousands of developers from across the globe contribute to it, enhancing its capabilities and adapting it to new use cases. As a result, Kubernetes continues to evolve at a pace that is only possible through open source.

Open-sourcing Kubernetes expanded opportunities for an entire industry

Kubernetes was born at Google and released as open source in 2014. Its roots trace back to Google’s internal Borg system (introduced between 2003 and 2004), which powers everything from Google Search to Maps to YouTube. On average, Google launches more than 4 billion containers a week!

Open-sourcing Kubernetes was a revolutionary move. It spawned the Cloud Native Computing Foundation (CNCF) and fostered a community of contributors and users around the world. As this global community continues to grow, Google’s commitment to Kubernetes is stronger than ever, acting as a steward and providing consistent leadership to ensure its continued growth.

Today, Google is the largest contributor to Kubernetes, with over one million contributions, more than the next four organizations combined. In addition to investing time and development resources, Google Cloud donates millions of dollars per year to support the infrastructure needed to host the project's container images and to build and test each release.

Looking strictly at cloud providers over the past year, Google Cloud has made three times as many contributions as the next closest provider.

Our contributions to, and engagement with, Kubernetes are far-reaching:

  • Co-chairing and acting as technical leads for many core Special Interest Groups (SIGs) including API Machinery, Autoscaling, Networking, Scheduling, and Storage.
  • Identifying and resolving complex problems that affect both the community and Google's customers. For example, Google has invested heavily alongside the community in improving upgrades and deprecation handling across Kubernetes, making the platform much more stable for everyone.
  • Fixing over half of the security vulnerabilities found in Kubernetes, a significant contribution that demonstrates Google's commitment to keeping Kubernetes secure for its users.
  • Working closely with the Googlers who develop the Go programming language, in which Kubernetes is written, to keep Kubernetes compatible with the latest Go versions and to address any security vulnerabilities found in Go.
  • Leading the development of the Pod Security Standards, a set of best practices for securing Kubernetes pods, and publishing guides and resources to help users apply them.
  • Helping create the initial Container Storage Interface (CSI) specification, which defines how containers access storage. CSI is now widely implemented by open source and commercial storage vendors.
  • Creating the Common Expression Language (CEL) for expressing queries and transformations on structured data. CEL is used in a variety of Kubernetes components, including Validating Admission Policy and Custom Resource Validation Expressions, and has helped improve the extensibility and usability of Kubernetes (a minimal evaluation sketch follows this list).
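
To make the CEL item above concrete, here is a minimal, self-contained sketch of compiling and evaluating a CEL expression with the open source cel-go library. The variable name and the rule are illustrative only; they are not taken from any particular Kubernetes policy.

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/cel-go/cel"
)

func main() {
	// Declare the variables the expression may reference. Kubernetes
	// admission policies expose objects such as `object` and `oldObject`;
	// here we use a single illustrative variable instead.
	env, err := cel.NewEnv(
		cel.Variable("replicas", cel.IntType),
	)
	if err != nil {
		log.Fatal(err)
	}

	// A validation-style rule: it must evaluate to true for the
	// (hypothetical) object to be admitted.
	ast, issues := env.Compile(`replicas >= 1 && replicas <= 50`)
	if issues != nil && issues.Err() != nil {
		log.Fatal(issues.Err())
	}

	prg, err := env.Program(ast)
	if err != nil {
		log.Fatal(err)
	}

	out, _, err := prg.Eval(map[string]interface{}{"replicas": 3})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("rule passed:", out.Value()) // rule passed: true
}
```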

Google's contributions to Kubernetes have been significant and have helped make the platform more robust, scalable, secure, and reliable. Moreover, Google continues to push Kubernetes forward into new domains such as batch processing and machine learning, with contributions to the CNCF ecosystem such as Kueue for job queueing and Kubeflow for ML operations and workflows. These contributions matter; if the Kubernetes community is thriving, it’s thanks to a core group of individuals and companies investing their time in the critical “chopping wood and carrying water” tasks and building new functionality from which everyone can benefit. For Kubernetes to continue to be a great platform for new workloads such as AI/ML, we need more companies that benefit from Kubernetes to do their part and contribute.

Why customers trust Google Kubernetes Engine for mission-critical workloads

Google Kubernetes Engine (GKE) is the most scalable and fully automated Kubernetes service available. It is a popular choice for businesses of all sizes and industries, and hosts some of the world's largest and most complex applications. With GKE, you can be confident that your applications are running on a reliable and scalable platform backed by Google Cloud's expertise. GKE now includes multi-cluster and distributed team management, policy enforcement with Policy Controller, GitOps-based configuration with Config Sync, self-service provisioning of your Google Cloud resources with Config Controller, and a fully managed Istio-powered service mesh. All of these capabilities are integrated with GKE Enterprise and are ideal both for customers getting started with Kubernetes and for those already deployed globally.

Customers use GKE to run mission-critical applications for a variety of reasons:

  • Who better to operate and manage your environment than the team that created Kubernetes? The entire open source Kubernetes project is built, tested, and distributed on Google Cloud, and we use GKE for several services including Vertex AI and DeepMind.
  • Google is a Leader in the 2023 Gartner Magic Quadrant for Container Management.
  • It accelerates and efficiently scales AI/ML workloads with GPU time-sharing and Cloud TPUs.
  • GKE offers the first fully managed, serverless Kubernetes experience with GKE Autopilot, a hands-off mode of operation that manages the underlying compute infrastructure while providing the full power of the Kubernetes API, backed by a pod-level SLA and Google’s renowned SRE team (a minimal provisioning sketch follows this list).
  • It scales to meet the needs of even the largest and most demanding applications, with support for unparalleled 15,000-node clusters. For instance, PGS replaced its Cray with a GKE-based supercomputer capable of 72.02 petaFLOPS.
  • GKE delivers enterprise-grade security with features such as GKE Security Posture to scan for misconfigured workloads and container image vulnerabilities, network policy enforcement with built-in Kubernetes Network Policy, GKE Sandbox for isolating untrusted workloads, and Confidential Nodes for encrypting workload data in use.
  • Seamless automatic upgrades with fine-grained controls such as blue-green upgrades and maintenance windows and exclusions.
  • Flexible deployment options to meet business, regulatory and/or compliance needs and requirements. These include Google Distributed Cloud, to extend Google Cloud to customer data centers or edge locations with fully managed hardware and software deployment options; multi-cloud deployment to AWS and Azure; and the ability to attach and manage any CNCF-compliant Kubernetes cluster.
  • Google Cloud has deep expertise in running cost-optimized applications, and published the inaugural State of Kubernetes Cost Optimization Report.
  • We release new minor versions of GKE approximately 30 days after the release of the corresponding open source version, ensuring that GKE users have access to the latest security patches and features as soon as possible.
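
As a rough illustration of the hands-off Autopilot experience mentioned above, the following sketch creates an Autopilot cluster with the Google Cloud Go client library. The project, location, and cluster name are placeholders, and error handling is kept minimal; treat this as a starting point rather than a complete provisioning workflow.

```go
package main

import (
	"context"
	"log"

	container "cloud.google.com/go/container/apiv1"
	"cloud.google.com/go/container/apiv1/containerpb"
)

func main() {
	ctx := context.Background()

	// The client authenticates with Application Default Credentials.
	c, err := container.NewClusterManagerClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Placeholder project and region; replace with your own values.
	req := &containerpb.CreateClusterRequest{
		Parent: "projects/my-project/locations/us-central1",
		Cluster: &containerpb.Cluster{
			Name: "autopilot-demo",
			// Autopilot mode: GKE manages nodes, scaling, and upgrades.
			Autopilot: &containerpb.Autopilot{Enabled: true},
		},
	}

	// Cluster creation is a long-running operation; this call only starts it.
	op, err := c.CreateCluster(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("cluster creation started: %s", op.GetName())
}
```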

If you are looking for a scalable, reliable, and fully automated Kubernetes service to run everything from microservices to databases to the most-demanding generative AI workloads, then GKE is the right choice for you.

Join us at KubeCon + CloudNativeCon North America 2023

If you plan to be at KubeCon, we’d love to meet with you. You can check out all of our plans here.

You can also stop by booth #D2 to see demos and lightning talks, or simply to meet with our GKE and Kubernetes experts and engineers. And if you can’t make it this year, you can check out our exclusive preview on-demand.
