Increased velocity, reduced risk, and lower TCO
With the new premium GKE Enterprise edition, platform teams
benefit from increased velocity by configuring and observing
multiple clusters from one place, defining configuration for
teams rather than clusters, and providing self-service
options for developers for deployment and management of
apps. You can reduce risk using advanced security and
GitOps-based configuration management. And you can lower
total cost of ownership (TCO) with a fully integrated and
managed solution, adding up to a 196% ROI in three years.
GKE Standard edition
provides fully automated cluster life cycle management,
pod and cluster autoscaling, cost visibility, and
automated infrastructure cost optimization. It includes
all the existing benefits of GKE and offers both the
Autopilot and Standard operation modes. The new premium
GKE Enterprise edition
offers all of the above, plus management, governance,
security, and configuration for multiple teams and
clusters—all with a unified console experience and
integrated service mesh.
A hands-off experience using Autopilot
GKE Autopilot is a hands-off operations mode that manages
your cluster's underlying compute (without you needing to
configure or monitor) while still delivering a complete
Kubernetes experience. Autopilot ensures you pay only for
your running pods, not system components, operating system
overhead, or unallocated capacity, for up to 85% savings
from resource and operational efficiency. Both the Autopilot
and Standard operations modes are available as part of the
GKE Enterprise edition.
Networking and security
Clusters in GKE can be restricted to a private endpoint or a
public endpoint that only certain address ranges can
access. GKE Sandbox for
the Standard mode of operation provides a second layer of
defense between containerized workloads on GKE for
enhanced workload security.
GKE clusters inherently support Kubernetes Network Policy to
restrict traffic with pod-level firewall rules.
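As a sketch, such a pod-level rule is declared with a standard Kubernetes NetworkPolicy manifest; the names and labels below are hypothetical:

```yaml
# Illustrative only: allow ingress to "app: web" pods solely from
# "app: frontend" pods in the same namespace; other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```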
Pod and cluster autoscaling
GKE implements the full
Kubernetes API, four-way autoscaling, release channels,
multi-cluster support, and
scales up to 15,000 nodes.
Horizontal pod autoscaling can be based on CPU utilization
or custom metrics. Cluster autoscaling works on a
per-node-pool basis, and vertical pod autoscaling
continuously analyzes the CPU and memory usage of pods,
automatically adjusting CPU and memory requests.
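For illustration, CPU-based horizontal pod autoscaling is declared with a standard HorizontalPodAutoscaler manifest; the Deployment name and thresholds here are hypothetical:

```yaml
# Illustrative only: keep the "web" Deployment near 60% average CPU
# utilization, scaling between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```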
Prebuilt applications and templates
Get access to
enterprise-ready containerized solutions with prebuilt
deployment templates, featuring portability, simplified
licensing, and consolidated billing. These are not just
container images, but open source, Google-built, and
commercial applications that increase developer
productivity. Click to deploy on-premises or in third-party
clouds from Google Cloud Marketplace.
GPU and TPU support
GKE supports GPUs and
TPUs and makes it easy to run ML, GPGPU, HPC, and other
workloads that benefit from specialized hardware.
Multi-team management via fleet team scopes
Use fleet team scopes
to organize clusters and workloads, and assign resources to
multiple teams easily to improve velocity and delegate
ownership. Team scopes let you define subsets of fleet
resources on a per-team basis, with each scope associated
with one or more fleet member clusters.
You might choose
multiple clusters to separate services across environments,
tiers, locales, teams, or infrastructure providers.
Google Cloud components and features that support them
strive to make managing multiple clusters as easy as
managing a single cluster.
Backup for GKE
Backup for GKE is
an easy way for customers running stateful workloads on GKE
to protect, manage, and restore their containerized
applications and data.
Multi-cloud support
GKE runs Certified
Kubernetes, enabling workload portability to other
Kubernetes platforms across clouds and on-premises. You can
also run your apps anywhere with consistency using GKE on
Google Cloud, GKE on AWS, or GKE on Azure.
Take advantage of
Kubernetes and cloud technology in your own data center with
Google Distributed Cloud.
Get the GKE experience with quick, managed, and simple
installs as well as upgrades validated by Google.
Managed service mesh
Manage, observe, and
secure your services with Google’s implementation of the
powerful Istio open source project. Simplify traffic
management and monitoring with a fully managed service mesh.
Create and enforce
consistent configurations and security policies across
clusters, fleets, and teams with managed GitOps config
management.
Identity and access management
Control access in the
cluster with your Google accounts and role permissions.
Hybrid networking
Reserve an IP address
range for your cluster, allowing your cluster IPs to coexist
with private network IPs via Google Cloud VPN.
Security and compliance
GKE is backed by a
Google security team of over 750 experts and is both HIPAA
and PCI DSS compliant.
Integrated logging and monitoring
Enable Cloud Logging
and Cloud Monitoring with simple checkbox configurations,
making it easy to gain insight into how your application is
running.
Cluster options
Choose clusters
tailored to the availability, version stability, isolation,
and pod traffic requirements of your workloads.
Auto scale
Scale your application deployment up and down based on
resource utilization (CPU, memory).
Auto upgrade
Automatically keep your
cluster up to date with the latest release version of
Kubernetes.
Auto repair
When auto repair is
enabled, if a node fails a health check, GKE initiates a
repair process for that node.
Resource limits
Kubernetes allows you
to specify how much CPU and memory (RAM) each container
needs, which is used to better organize workloads within
your clusters.
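For illustration, these requests and limits are declared per container in the pod spec; the image and values below are hypothetical:

```yaml
# Illustrative only: a container declaring the CPU and memory it
# requests (used for scheduling) and the limits it may not exceed.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi
```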
GKE Sandbox
GKE Sandbox provides
a second layer of defense between containerized workloads on
GKE for enhanced workload security.
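Per the GKE documentation, pods opt into GKE Sandbox through the gVisor RuntimeClass. A minimal sketch, assuming a node pool created with GKE Sandbox enabled (pod name and image hypothetical):

```yaml
# Illustrative only: run this pod inside GKE Sandbox (gVisor) by
# setting runtimeClassName; requires a sandbox-enabled node pool.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-web
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx:1.25
```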
Stateful application support
GKE isn't just for
12-factor apps. You can attach persistent storage to
containers, and even host complete databases.
Docker image support
GKE supports the common
Docker container format.
OS built for containers
GKE runs on
Container-Optimized OS, a hardened OS built and managed by
Google.
Private container registry
Integrating with Google
Container Registry makes it easy to store and access your
private Docker images.
Fast, consistent builds
Use Cloud Build to
reliably deploy your containers on GKE without needing to
set up authentication.
Built-in dashboards
Google Cloud console
offers useful dashboards for your project's clusters and
their resources. You can use these dashboards to view,
inspect, manage, and delete resources in your clusters.
Spot VMs
Affordable compute instances suitable for batch jobs and
fault-tolerant workloads. Spot VMs provide significant
savings of up to 91% while still getting the same
performance and capabilities as regular VMs.
Persistent disk support
Durable, high-performance block storage for container
instances. Data is stored redundantly for integrity,
storage can be resized without interruption, and data is
automatically encrypted. You can create persistent disks in
HDD or SSD formats. You can also take snapshots of your
persistent disk and create new persistent disks from that
snapshot.
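As a sketch, a stateful workload typically requests a persistent disk through a standard PersistentVolumeClaim, which GKE provisions dynamically; the name and size below are hypothetical:

```yaml
# Illustrative only: request a 100 GiB disk via a PVC. Omitting
# storageClassName uses the cluster's default storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```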
Local SSD support
GKE offers always
encrypted, local, solid-state drive (SSD) block storage.
Local SSDs are physically attached to the server that hosts
the virtual machine instance for very high input/output
operations per second (IOPS) and very low latency compared
to persistent disks.
Global load balancing
technology helps you distribute incoming requests across
pools of instances across multiple regions, so you can
achieve maximum performance, throughput, and availability at
low cost.
Linux and Windows support
Fully supported for
both Linux and Windows workloads, GKE can run both Windows
Server and Linux nodes.
Serverless containers
Run stateless serverless containers, abstracting away all
infrastructure management, and automatically scale them
with Cloud Run.
Usage metering
Gain fine-grained visibility into your Kubernetes clusters.
See your GKE clusters' resource
usage broken down by namespaces and labels, and attribute it
to meaningful entities.
Release channels
Release channels provide more control over which automatic
updates a given
cluster receives, based on the stability requirements of the
cluster and its workloads. You can choose rapid, regular, or
stable. Each has a different release cadence and targets
different types of workloads.
Software supply chain security
Verify, enforce, and
improve security of infrastructure components and packages
used for container images with Artifact Analysis.
Per-second billing
Google bills in
second-level increments. You pay only for the compute time
that you use.