Deploy a Kubernetes Cluster and Update Production Code in Seconds
Now it's your turn. Open a terminal and learn how to create a Kubernetes cluster on Kubernetes Engine with the gcloud command-line tool.
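A minimal sketch of creating a cluster and connecting to it with the gcloud CLI; the cluster name, zone, node count, and machine type below are illustrative placeholders, not recommendations.

```shell
# Create a three-node GKE cluster (names and sizes are illustrative).
gcloud container clusters create example-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type n1-standard-1

# Configure kubectl to talk to the new cluster.
gcloud container clusters get-credentials example-cluster \
    --zone us-central1-a

# Verify that the nodes have registered.
kubectl get nodes
```

Once `kubectl get nodes` lists the nodes as `Ready`, the cluster is serving and you can deploy workloads to it.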
- Identity & Access Management: Control access in the cluster with your Google accounts and role permissions.
- Hybrid Networking: Reserve an IP address range for your cluster, allowing your cluster IPs to coexist with private network IPs via Google Cloud VPN.
- Security and Compliance: Kubernetes Engine is backed by a Google security team of over 750 experts and is both HIPAA and PCI DSS 3.1 compliant.
- Integrated Logging & Monitoring: Enable Stackdriver Logging and Stackdriver Monitoring with simple checkbox configurations, making it easy to gain insight into how your application is running.
- Auto Scale: Automatically scale your application deployment up and down based on resource utilization (CPU, memory).
- Auto Upgrade: Automatically keep your cluster up to date with the latest release version of Kubernetes. Kubernetes release updates are quickly made available within Kubernetes Engine.
- Auto Repair: When auto repair is enabled, if a node fails a health check, Kubernetes Engine initiates a repair process for that node.
- Resource Limits: Kubernetes allows you to specify how much CPU and memory (RAM) each container needs, which is used to better organize workloads within your cluster.
- Container Isolation: Use GKE Sandbox for a second layer of defense between containerized workloads on Google Kubernetes Engine (GKE) for enhanced workload security.
- Stateful Application Support: Kubernetes Engine isn't just for 12-factor apps. You can attach persistent storage to containers, and even host complete databases.
- Docker Image Support: Kubernetes Engine supports the common Docker container format.
- Fully Managed: Kubernetes Engine clusters are fully managed by Google Site Reliability Engineers (SREs), ensuring your cluster is available and up to date.
- OS Built for Containers: Kubernetes Engine runs on Container-Optimized OS, a hardened OS built and managed by Google.
- Private Container Registry: Integration with Google Container Registry makes it easy to store and access your private Docker images.
- Fast, Consistent Builds: Use Google Cloud Build to reliably deploy your containers on Kubernetes Engine without needing to set up authentication.
- Workload Portability, On-Premises and Cloud: Kubernetes Engine runs Certified Kubernetes, enabling workload portability to other Kubernetes platforms across clouds and on-premises.
- GPU Support: Kubernetes Engine supports GPUs and makes it easy to run ML, GPGPU, HPC, and other workloads that benefit from specialized hardware accelerators.
- Built-in Dashboard: Cloud Console offers useful dashboards for your project's clusters and their resources. You can use these dashboards to view, inspect, manage, and delete resources in your clusters.
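The resource-limits and autoscaling features above map onto standard Kubernetes objects. A minimal sketch of a Deployment with CPU/memory requests and limits, plus CPU-based autoscaling; the workload name and the `nginx` image are illustrative, not GKE-specific:

```shell
# Create a Deployment whose container declares resource requests and limits.
# Requests inform the scheduler; limits cap what the container may consume.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
EOF

# Autoscale between 2 and 10 replicas, targeting 60% average CPU utilization.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=60
```

Because the container declares a CPU request, the HorizontalPodAutoscaler created by `kubectl autoscale` can compute utilization as a percentage of that request and add or remove replicas accordingly.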
“Kubernetes Engine delivers a high-performing, flexible infrastructure that lets us independently scale components for maximum efficiency.”--George Yianni, inventor of Hue and head of technology, connected lighting for the home at Signify
“Google Kubernetes Engine is like magic for us. It's the best container environment there is. Without it, we couldn't provide the advanced financial analytics we offer today. Scaling would be difficult and prohibitively expensive.”--Michael Bishop, CTO and Co-Founder, Alpha Vertex
“Google Kubernetes Engine provides us with the openness, stability and scalability we need to manage and orchestrate our Docker containers. This year, our customers flourished during Black Friday and Cyber Monday with zero outages, downtime or interruptions in service thanks, in part, to Google Kubernetes Engine.”--Will Warren, Chief Technology Officer, GroupBy
“By moving to Google Cloud Platform and Kubernetes Engine, we were able to scale quickly from 15 services to 350 services while reducing our cloud hosting costs by approximately 60%.”--Pablo Moncada, IT DevOps Team Lead, BQ
Cloud Run for Anthos (Beta)
Cloud Run for Anthos provides a simpler developer experience for deploying stateless services in Anthos. Cloud Run for Anthos abstracts away Kubernetes concepts while providing automatic scaling based on HTTP requests, scaling to zero instances, automatic networking, and integration with Stackdriver. It gives you serverless on your own terms, with access to custom machine types, VPC networks, GPU accelerators, and the ability to run side by side with other workloads in Anthos. Cloud Run for Anthos is compatible with Knative and provides a consistent experience which enables you to run your serverless workloads anywhere: on Google Cloud or on-premises.
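A minimal sketch of deploying a stateless service to Cloud Run for Anthos with the beta gcloud CLI; the service name, cluster, location, and container image are illustrative placeholders, and the exact flags may vary while the product is in beta.

```shell
# Deploy a container image as an autoscaled, request-driven service onto an
# existing GKE cluster (cluster and image names are illustrative).
gcloud beta run deploy hello-service \
    --platform gke \
    --cluster example-cluster \
    --cluster-location us-central1-a \
    --image gcr.io/example-project/hello:latest
```

The same `gcloud run deploy` workflow with `--platform managed` targets fully managed Cloud Run, which is part of the consistent Knative-based experience described above.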
A product listed on this page is in beta. For more information, see the documentation on product launch stages.