Put your containers on autopilot and securely run your enterprise workloads at scale—with little to no K8s expertise required.
New customers can use $300 in free credits to try out GKE.
With the new premium GKE Enterprise edition, platform teams benefit from increased velocity by configuring and observing multiple clusters from one place, defining configuration for teams rather than clusters, and providing developers with self-service options for deploying and managing apps. You can reduce risk with advanced security and GitOps-based configuration management, and lower total cost of ownership (TCO) with a fully integrated and managed solution, delivering up to 196% ROI over three years.
GKE Standard edition provides fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. It includes all the existing benefits of GKE and offers both the Autopilot and Standard operation modes. The new premium GKE Enterprise edition offers all of the above, plus management, governance, security, and configuration for multiple teams and clusters—all with a unified console experience and integrated service mesh.
GKE Autopilot is a hands-off operations mode that manages your cluster’s underlying compute so you don’t have to configure or monitor it, while still delivering a complete Kubernetes experience. And with per-pod billing, Autopilot ensures you pay only for your running pods, not for system components, operating system overhead, or unallocated capacity, for up to 85% savings from improved resource and operational efficiency. Both the Autopilot and Standard operation modes are available as part of the GKE Enterprise edition.
Privately networked clusters in GKE can be restricted to a private endpoint, or to a public endpoint that only certain address ranges can access. GKE Sandbox, available in the Standard mode of operation, provides a second layer of defense between containerized workloads and the host for enhanced workload security. GKE clusters natively support Kubernetes Network Policy, which restricts traffic with pod-level firewall rules.
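As a sketch of what pod-level firewall rules look like, the following Kubernetes NetworkPolicy (all names here are illustrative) allows inbound traffic to pods labeled `app: api` only from pods labeled `role: frontend`, and only on one port:

```yaml
# Illustrative NetworkPolicy: allow ingress to app=api pods
# only from pods labeled role=frontend, on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Any traffic not matched by an allow rule is dropped once the pod is selected by a policy.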
GKE implements the full Kubernetes API, four-way autoscaling, release channels, and multi-cluster support, and scales up to 15,000 nodes. Horizontal pod autoscaling can be based on CPU utilization or on custom metrics. Cluster autoscaling works on a per-node-pool basis, and vertical pod autoscaling continuously analyzes the CPU and memory usage of pods, automatically adjusting their CPU and memory requests.
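For example, CPU-based horizontal pod autoscaling can be declared with a standard HorizontalPodAutoscaler manifest like the one below (the Deployment name and thresholds are illustrative):

```yaml
# Illustrative HorizontalPodAutoscaler: keep average CPU
# utilization near 60% across 2-10 replicas of a Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```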
Get access to enterprise-ready containerized solutions with prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing. These are not just container images, but open source, Google-built, and commercial applications that increase developer productivity. Click to deploy on-premises or in third-party clouds from Google Cloud Marketplace.
GKE supports GPUs and TPUs and makes it easy to run ML, GPGPU, HPC, and other workloads that benefit from specialized hardware accelerators.
Use fleets to organize clusters and workloads, and assign resources to multiple teams easily to improve velocity and delegate ownership. Team scopes let you define subsets of fleet resources on a per-team basis, with each scope associated with one or more fleet member clusters.
Backup for GKE is an easy way for customers running stateful workloads on GKE to protect, manage, and restore their containerized applications and data.
GKE runs Certified Kubernetes, enabling workload portability to other Kubernetes platforms across clouds and on-premises. You can also run your apps anywhere with consistency using GKE on Google Cloud, GKE on AWS, or GKE on Azure.
Take advantage of Kubernetes and cloud technology in your own data center through Google Distributed Cloud. Get the GKE experience with quick, managed, and simple installs as well as upgrades validated by Google.
Manage, observe, and secure your services with Google’s implementation of the powerful Istio open source project. Simplify traffic management and monitoring with a fully managed service mesh.
Create and enforce consistent configurations and security policies across clusters, fleets, and teams with managed GitOps config deployment.
Control access in the cluster with your Google accounts and role permissions.
Reserve an IP address range for your cluster, allowing your cluster IPs to coexist with private network IPs via Google Cloud VPN.
GKE is backed by a Google security team of over 750 experts and is both HIPAA and PCI DSS compliant.
Enable Cloud Logging and Cloud Monitoring with simple checkbox configurations, making it easy to gain insight into how your application is running.
Choose clusters tailored to the availability, version stability, isolation, and pod traffic requirements of your workloads.
Automatically scale your application deployment up and down based on resource utilization (CPU, memory).
Automatically keep your cluster up to date with the latest release version of Kubernetes.
When auto repair is enabled, if a node fails a health check, GKE initiates a repair process for that node.
Kubernetes allows you to specify how much CPU and memory (RAM) each container needs, which is used to better organize workloads within your cluster.
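A minimal sketch of how those requests and limits are declared in a Pod spec (image and values are illustrative); the scheduler uses the requests to place the Pod on a node with enough free capacity:

```yaml
# Illustrative Pod spec fragment: requests guide scheduling,
# limits cap what the container may consume.
apiVersion: v1
kind: Pod
metadata:
  name: resourced-app        # hypothetical name
spec:
  containers:
    - name: app
      image: gcr.io/example/app:1.0   # hypothetical image
      resources:
        requests:
          cpu: "250m"        # 0.25 vCPU reserved for scheduling
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```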
GKE isn't just for 12-factor apps. You can attach persistent storage to containers, and even host complete databases.
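As an illustration of attaching persistent storage, a PersistentVolumeClaim plus a Pod that mounts it might look like the following (names and sizes are hypothetical); GKE dynamically provisions a persistent disk to back the claim:

```yaml
# Illustrative PersistentVolumeClaim backed by a
# dynamically provisioned persistent disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc             # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# Pod mounting the claim at /var/lib/data.
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app         # hypothetical name
spec:
  containers:
    - name: db
      image: gcr.io/example/db:1.0   # hypothetical image
      volumeMounts:
        - mountPath: /var/lib/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```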
GKE supports the common Docker container format.
GKE runs on Container-Optimized OS, a hardened OS built and managed by Google.
Integrating with Google Container Registry makes it easy to store and access your private Docker images.
Use Cloud Build to reliably deploy your containers on GKE without needing to set up authentication.
Google Cloud console offers useful dashboards for your project's clusters and their resources. You can use these dashboards to view, inspect, manage, and delete resources in your clusters.
Affordable compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs provide significant savings of up to 91% while still getting the same performance and capabilities as regular VMs.
Durable, high-performance block storage for container instances. Data is stored redundantly for integrity, can be resized without interrupting your workload, and is automatically encrypted. You can create persistent disks in HDD or SSD formats, take snapshots of a persistent disk, and create new persistent disks from a snapshot.
GKE offers always encrypted, local, solid-state drive (SSD) block storage. Local SSDs are physically attached to the server that hosts the virtual machine instance for very high input/output operations per second (IOPS) and very low latency compared to persistent disks.
Global load-balancing technology helps you distribute incoming requests across pools of instances across multiple regions, so you can achieve maximum performance, throughput, and availability at low cost.
GKE fully supports both Linux and Windows workloads, running Windows Server and Linux nodes side by side.
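To steer Windows Server containers onto Windows nodes, workloads use the standard `kubernetes.io/os` node label, as in this illustrative Deployment fragment (names are hypothetical):

```yaml
# Illustrative Deployment fragment: schedule Windows Server
# containers onto Windows nodes via the standard OS label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-app              # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: win-app
  template:
    metadata:
      labels:
        app: win-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        - name: app
          image: mcr.microsoft.com/windows/servercore/iis   # example image
```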
Run stateless serverless containers with Cloud Run, which abstracts away all infrastructure management and automatically scales them.
Get fine-grained visibility into your Kubernetes clusters. See your GKE clusters' resource usage broken down by namespace and label, and attribute it to meaningful entities.
Release channels provide more control over which automatic updates a given cluster receives, based on the stability requirements of the cluster and its workloads. You can choose rapid, regular, or stable. Each has a different release cadence and targets different types of workloads.
Verify, enforce, and improve security of infrastructure components and packages used for container images with Artifact Analysis.
Google bills in second-level increments. You pay only for the compute time that you use.
How It Works
A GKE cluster has a control plane and machines called nodes. Nodes run the services supporting the containers that make up your workload. The control plane decides what runs on those nodes, including scheduling and scaling.
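In practice, you describe the desired state declaratively and the control plane schedules it across nodes. A minimal sketch, with a hypothetical name and image, asking for three replicas of a web container:

```yaml
# Illustrative Deployment: the control plane schedules three
# replicas of this container across the cluster's nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello
          image: gcr.io/example/hello:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

If a node fails, the control plane reschedules the affected replicas onto healthy nodes to restore the declared state.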