Creating clusters
-
Creating a zonal cluster
Learn how to create a zonal cluster.
-
Creating a regional cluster
Learn how to create a regional cluster to increase availability of the cluster's control plane and workloads during cluster upgrades, automated maintenance, or a zonal disruption.
-
Creating an Autopilot cluster
Learn how to set up an Autopilot cluster, where Google manages the cluster's underlying infrastructure including nodes and node pools.
-
Creating a private cluster
Learn how to set up a private cluster.
-
Creating an alpha cluster
Learn how to create an alpha cluster, a cluster with Kubernetes alpha features enabled in Google Kubernetes Engine.
-
Create clusters and node pools with Arm nodes (Preview)
Learn how to create GKE clusters with Arm node pools.
-
Creating a cluster using Windows Server node pools
Learn how to create a cluster where you can use Windows Server containers.
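As a quick orientation to the cluster types above, the following gcloud sketch shows how a zonal, a regional, and an Autopilot cluster might each be created. Cluster names, zones, and regions are placeholders; see the linked pages for the full set of flags.

```shell
# Zonal cluster: a single control plane in one zone (example name and zone).
gcloud container clusters create example-zonal-cluster \
    --zone us-central1-a --num-nodes 3

# Regional cluster: control plane and nodes replicated across the region's zones.
gcloud container clusters create example-regional-cluster \
    --region us-central1

# Autopilot cluster: Google manages the nodes and node pools for you.
gcloud container clusters create-auto example-autopilot-cluster \
    --region us-central1
```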
Administering clusters
-
Cluster administration overview
Learn the basics of administering your GKE clusters.
-
Managing clusters
Learn how to view your clusters, set a default cluster for command-line tools, and change a cluster's zones.
-
Understanding cluster resource usage
Track the usage of cluster resources such as CPU, GPU, memory, network egress, and storage.
-
Install kubectl and configure cluster access
Learn how to configure cluster access for kubectl.
-
Manually upgrading a cluster or node pool
Learn about upgrading the Kubernetes version running on your cluster or its nodes.
-
Resizing a cluster
Learn how to change the number of nodes in your cluster or node pool.
-
Autoscaling a cluster
Learn how to automatically scale a cluster.
-
Viewing cluster autoscaler events
Learn how the cluster autoscaler emits visibility events, and how to view the logged events.
-
Deleting a cluster
Learn how to delete a cluster and clean up your GKE environment.
-
Adding and managing node pools
Learn about adding new node pools and managing existing ones in your clusters.
-
Migrate your container runtime to containerd
Learn how to migrate your Docker node image types to containerd.
-
Customizing node system configuration
Learn how to apply advanced configuration options to the `kubelet` and `sysctl` in your node pools.
-
Applying updates to existing node pools
Learn how to dynamically update the network tags, node labels, and node taints of an existing node pool.
-
Use network tags to apply firewall rules to nodes
Learn how to use network tags to apply firewall rules and routes to specific nodes and node pools.
-
Using Compute Engine sole-tenant nodes in GKE
Learn how to use sole-tenant nodes in GKE.
-
Consuming reserved zonal resources
Learn how to consume Compute Engine instances reserved in a specific zone.
-
Using node auto-provisioning
Learn how to use node auto-provisioning to automatically create and delete node pools.
-
Specifying a node image
Learn how to run a specific node image on your nodes.
-
Auto-upgrading nodes
Learn how to configure your nodes to automatically upgrade to the latest version of Kubernetes.
-
Receive cluster notifications
Learn how to receive notifications for cluster events.
-
Verifying node upgrades and quota
Learn how to verify your node upgrades and the quota required to perform them.
-
Auto-repairing nodes
Learn how to automatically repair your nodes.
-
Observing your GKE clusters
Learn how to observe your GKE clusters using monitoring dashboards.
-
View detailed breakdown of cluster costs (Preview)
Learn how to view a breakdown of your GKE cluster costs by cluster name, namespace, and label.
-
View cost-related optimization metrics (Preview)
Learn how to view cost-related optimization metrics for your clusters and workloads.
-
Configuring node pool upgrade strategies
Learn how to configure node pool upgrade strategies for your GKE cluster node pools.
-
Configuring maintenance windows and exclusions
Learn how to use maintenance windows and exclusions to control when automatic maintenance, such as cluster and node auto-upgrades, can and cannot occur on your clusters.
-
Creating and managing cluster labels
Learn how to organize your Google Cloud clusters with cluster labels.
-
Create and manage Tags in GKE
Learn how to use Tags to conditionally apply policies to your GKE clusters.
-
Configuring Windows Server nodes to automatically join an AD domain
Learn how to configure your Windows Server nodes in your GKE cluster to automatically join an Active Directory domain.
-
Fleets management overview
Learn how to use multi-cluster management capabilities and get started managing your fleet.
-
Create fleets to simplify multi-cluster management
Learn how fleet creation works, with details for different cluster types.
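Day-to-day administration tasks like those listed above usually start by pointing kubectl at the cluster, then operating on it with gcloud. A minimal sketch, assuming a zonal cluster named example-cluster:

```shell
# Fetch credentials so kubectl talks to this cluster.
gcloud container clusters get-credentials example-cluster --zone us-central1-a

# Resize an existing node pool to five nodes.
gcloud container clusters resize example-cluster \
    --node-pool default-pool --num-nodes 5 --zone us-central1-a

# Manually upgrade the control plane.
gcloud container clusters upgrade example-cluster --master --zone us-central1-a
```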
Configuring and expanding clusters
-
Running GPUs
Learn how to run GPUs in your clusters and node pools.
-
Running multi-instance GPUs
Learn how to partition a GPU to share across containers.
-
Share GPUs with multiple workloads using time-sharing
Learn how to share physical GPUs with multiple containers using time-sharing.
-
Choose compute classes for Autopilot Pods
Learn how to choose specialized compute configurations to run Autopilot Pods that have specific hardware requirements.
-
Deploy Autopilot workloads on Arm architecture
Learn how to configure your Autopilot workloads to request nodes that are backed by Arm architecture.
-
Choosing a minimum CPU platform
Learn how to choose a minimum CPU platform for your clusters and nodes.
-
Reducing add-on resource usage in smaller clusters
Learn how to conserve cluster resources in small clusters by fine-tuning cluster add-ons.
-
Configuring a custom boot disk
Learn how to customize a node boot disk.
-
Run fault-tolerant workloads at lower costs using Autopilot Spot Pods
Learn how to run fault-tolerant workloads at lower costs in Autopilot Spot Pods.
-
Use Spot VMs to run fault-tolerant workloads (Preview)
Learn how to run fault-tolerant workloads at lower prices on Spot VMs.
-
Use preemptible VMs to run fault-tolerant workloads
Learn how to run fault-tolerant workloads at lower prices on preemptible VMs.
-
Configure simultaneous multi-threading (SMT)
Learn how to configure SMT to change the number of vCPUs on your physical cores.
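Several of the configurations above are applied when you create a node pool. For example, a GPU node pool and a Spot VM node pool might be sketched as follows; the cluster name, machine type, and accelerator type are illustrative:

```shell
# Node pool with one NVIDIA T4 GPU attached to each node.
gcloud container node-pools create gpu-pool \
    --cluster example-cluster --zone us-central1-a \
    --machine-type n1-standard-4 \
    --accelerator type=nvidia-tesla-t4,count=1

# Node pool backed by Spot VMs for fault-tolerant workloads.
gcloud container node-pools create spot-pool \
    --cluster example-cluster --zone us-central1-a \
    --spot
```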
Deploying workloads to clusters
-
Overview of deploying workloads on your cluster
Learn the basics of how to deploy different types of applications, jobs, and other workloads on your cluster.
-
Deploying a stateless Linux application
Learn how to deploy a stateless Linux application on your cluster.
-
Deploying a stateless Windows Server application
Learn how to deploy a stateless Windows Server application.
-
Deploying a stateful application
Learn how to deploy a stateful application on your cluster.
-
Deploying an application from Cloud Marketplace
Learn how to deploy an application from Google Cloud Marketplace to your cluster.
-
Running a job
Learn how to run a finite or batch job on your cluster.
-
Running a CronJob
Learn how to run a timed or time-sensitive job on your cluster.
-
Scaling an application
Learn how to scale the number of running replicas of your application, either manually or automatically.
-
Configuring horizontal Pod autoscaling
Learn how to autoscale a Deployment using different types of metrics.
-
Scale container resource requests and limits
Learn how to update CPU and memory requests for containers.
-
Configuring multidimensional Pod autoscaling
Learn how to combine elements of horizontal and vertical Pod autoscaling.
-
Performing rolling updates
Learn how to perform rolling updates, which can update your running applications without downtime.
-
Define compact placement for GKE nodes (Preview)
Learn how to control how closely your nodes are physically located to each other within a zone by using a compact placement policy.
-
Controlling scheduling with node taints
Learn how to use the GKE node taints feature to help control where your workloads are scheduled.
-
Use Image streaming to pull container images
Learn how to improve workload initialization times by streaming container image data as your applications request it.
-
Managing applications with Application Delivery (Beta)
Use Application Delivery and private Git repositories to manage and deploy applications on GKE.
-
Setting up automated deployments
Learn how to configure automated deployments for your workloads.
-
Prepare an Arm workload for deployment (Preview)
Learn how to prepare a workload to be scheduled on Arm nodes in a GKE cluster.
-
Build multi-architecture images for Arm workloads (Preview)
Learn about multi-architecture (multi-arch) images, why they are useful for deploying Arm workloads to GKE clusters, how to check a container image for Arm readiness, and how to build a multi-arch image.
-
Run fault-tolerant workloads at lower costs with Spot VMs
Learn how to run fault-tolerant workloads at lower prices on Spot VMs.
-
Use preemptible VMs to run fault-tolerant workloads
Learn how to run fault-tolerant workloads at lower prices on preemptible VMs.
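The deployment and scaling tasks above map onto a few core kubectl commands. A minimal sketch using Google's public hello-app sample image and a placeholder deployment name:

```shell
# Deploy a stateless application.
kubectl create deployment hello-app \
    --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0

# Scale it manually to three replicas.
kubectl scale deployment hello-app --replicas=3

# Or autoscale it between 2 and 5 replicas, targeting 60% CPU utilization.
kubectl autoscale deployment hello-app --min=2 --max=5 --cpu-percent=60

# Roll out a new image version without downtime.
kubectl set image deployment/hello-app \
    hello-app=us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
```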
Configuring cluster storage
-
Creating volumes
Learn how to create a Deployment where each Pod contains one or more volumes.
-
Manually installing a CSI driver
Learn how to install a Container Storage Interface (CSI) storage driver.
-
Using the Compute Engine persistent disk CSI Driver
Learn how to automatically deploy and manage the Compute Engine persistent disk CSI Driver.
-
Using the Filestore CSI driver
Learn how to automatically deploy and manage the Filestore CSI driver.
-
Use the Filestore CSI Driver with Shared VPC
Learn how to use the Filestore CSI driver on a Shared VPC network.
-
Using persistent disks with multiple readers
Learn how to format and mount a disk for multiple readers.
-
Using SSD persistent disks
Learn how to use persistent disks backed by SSDs in your cluster.
-
Using preexisting persistent disks as PersistentVolumes
Learn how to add a preexisting persistent disk to your cluster.
-
Provisioning regional persistent disks
Learn how to dynamically or manually provision regional persistent disks to replicate data between two zones in the same region.
-
Using local SSDs
Learn how to use local SSDs to provide high-performance, ephemeral storage to nodes in your cluster.
-
Using volume expansion (Beta)
Learn how to use volume expansion to increase the size of your volume after its creation.
-
Using volume snapshots
Learn how to use volume snapshots to create a copy of your persistent volume.
-
Create clones of persistent volumes
Learn how to use volume cloning to create a duplicate of your existing persistent volume.
-
Use customer-managed encryption keys
Learn how to manage encryption for disks using keys in Cloud KMS.
-
Accessing Filestore fileshares
Learn how to access a Filestore fileshare by creating a persistent volume and persistent volume claim.
-
Using SMB CSI driver to access SMB volume on Windows Server nodes
Learn how to use the open source SMB CSI Driver for Kubernetes to access a NetApp Cloud Volumes Service SMB volume on a GKE cluster with Windows Server nodes.
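Most of the storage tasks above revolve around PersistentVolumeClaims, which GKE satisfies by dynamically provisioning a disk through the relevant CSI driver. A minimal claim, assuming the cluster's default storage class and a placeholder name:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
EOF
```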
Configuring cluster networking
-
Creating a routes-based cluster
Learn how to set up IP ranges for a routes-based cluster.
-
Creating a VPC-native cluster
Learn how to set up IP aliasing on your GKE cluster.
-
Setting up intranode visibility
Learn how to make all Pod-to-Pod communication visible to the Google Cloud network.
-
Optimizing IP address allocation
Learn how to optimize how IP addresses are allocated to nodes by configuring the maximum number of Pods per node.
-
Adding Pod IP address ranges
Learn how to use discontiguous multi-Pod CIDR to add Pod IP address ranges to clusters.
-
Setting up clusters with Shared VPC
Learn how to set up GKE clusters that use Shared VPC.
-
Creating a network policy
Learn how to enforce a Kubernetes network policy on your GKE cluster.
-
Using GKE Dataplane V2
Learn how to use GKE Dataplane V2 with your GKE clusters.
-
Using network policy logging
Learn how to use network policy logging to record connections allowed and denied by network policies.
-
Use Egress NAT Policy to configure IP masquerade in Autopilot clusters
Learn how to use Egress NAT Policy to configure IP masquerade on your Autopilot cluster, which allows GKE to change the source IP addresses of packets sent from Pods.
-
Configuring an IP masquerade agent
Learn how to configure an IP masquerade agent on your GKE cluster to allow multiple clients to access a destination using a single IP address.
-
Increase network traffic speed for GPU nodes
Learn how to enable higher network bandwidth for GPU nodes.
-
Using Cloud DNS for GKE (Preview)
Learn how to use Cloud DNS for GKE.
-
Using kube-dns
Learn how GKE implements service discovery using kube-dns.
-
Setting up NodeLocal DNSCache
Learn how to set up local DNS caches on your cluster nodes.
-
Setting up a custom kube-dns Deployment
Learn how to set up a custom Deployment of kube-dns.
-
Configuring TCP/UDP load balancing
Learn how to configure Services of type LoadBalancer.
-
Exposing applications using Services
Learn how to expose your application to network traffic from outside your cluster.
-
Using an internal load balancer
Learn how to set up an internal load balancer on your GKE cluster.
-
Configuring Ingress for external load balancing
Learn how to configure an external HTTP(S) load balancer by creating an Ingress object.
-
Configuring Ingress for internal load balancing
Learn how to configure an internal HTTP(S) load balancer by creating an Ingress object.
-
Container-native load balancing through Ingress
Learn how to set up container-native load balancing.
-
Using Google-managed SSL certificates
Learn how to use Ingresses to create external load balancers with Google-managed SSL certificates.
-
Using multiple SSL certificates in HTTPS load balancing with Ingress
Learn how to use multiple SSL certificates with your cluster's HTTP(S) load balancer.
-
Using HTTP/2 for load balancing with Ingress
Learn how to configure an HTTP(S) load balancer to use HTTP/2.
-
Use a custom Ingress controller
Learn how to configure a custom Ingress controller with GKE.
-
Deploying Gateways (Preview)
Learn how to use the Gateway API to configure load balancing and routing.
-
GatewayClass capabilities (Preview)
Learn capabilities of the GatewayClasses provided by GKE.
-
Configuring Ingress features
Learn about the features available for your Ingress controller and how to configure these features through parameters.
-
Container-native load balancing with standalone zonal NEGs
Learn how to use network endpoint groups independent of Ingress.
-
Configuring multi-cluster Services
Learn how to use multi-cluster Service to discover and invoke Services across multiple GKE clusters.
-
Enabling multi-cluster Gateways (Preview)
Learn how to enable your environment for multi-cluster load balancing.
-
Deploying multi-cluster Gateways (Preview)
Learn how to route and shift traffic to applications across multiple GKE clusters.
-
Setting up Multi Cluster Ingress
Create and register the clusters needed to configure Multi Cluster Ingress.
-
Deploying Ingress across clusters
Learn how to deploy Ingress across multiple clusters.
-
Troubleshooting and operations for Multi Cluster Ingress
Learn how to troubleshoot problems with the Anthos Ingress controller.
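The simplest of the exposure options above is a Service of type LoadBalancer; Ingress builds on Services for HTTP(S) routing. A hedged sketch, assuming a deployment named hello-app already exists:

```shell
# Provision a TCP/UDP load balancer in front of the Pods.
kubectl expose deployment hello-app \
    --type=LoadBalancer --port=80 --target-port=8080

# Or route HTTP traffic through an external HTTP(S) load balancer with Ingress.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  defaultBackend:
    service:
      name: hello-app
      port:
        number: 80
EOF
```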
Configuring cluster security
-
Harden your cluster's security
Follow best practices for hardening your cluster.
-
Automatically scan workloads for configuration issues (Preview)
Automatically scan workload configurations in your GKE clusters for potential security issues.
-
Creating Cloud IAM policies
Learn how to create Identity and Access Management policies for users and service accounts.
-
Use Workload Identity
Learn how to grant your workloads access to Google Cloud APIs.
-
Configure role-based access control
Learn how to create roles and grant them access to specific resources in namespaces or clusters.
-
Configure Google Groups for RBAC
Learn how to create and configure Groups in Google Workspace which you can then use to grant RBAC permissions in your clusters.
-
Authenticating to the Kubernetes API server
Learn about the supported authentication methods when connecting to the Kubernetes API server in GKE.
-
Use external identity providers to authenticate to GKE
Learn how to configure an external identity provider for cluster authentication.
-
Use Kubernetes service accounts
Learn about Kubernetes service accounts and how and when to use them in GKE.
-
Migrating from legacy access scopes
Learn how to migrate your pre-Kubernetes 1.10 clusters from access scopes to IAM for authentication.
-
Add authorized networks for control plane access
Learn how to set up authorized networks on your GKE cluster.
-
Encrypt secrets at the application layer
Learn how to encrypt Kubernetes Secrets at the application layer.
-
Use customer-managed encryption keys (CMEK)
Learn how to manage encryption for disks using keys in Cloud KMS.
-
Rotate your control plane IP
Learn how to rotate the IP address for your cluster's API server.
-
Rotate your cluster credentials
Learn how to rotate the credentials for your cluster's API server.
-
Accessing audit logs
Learn how to use Kubernetes audit logging in your cluster.
-
Enabling Linux auditd logs on GKE nodes
Learn how to enable verbose operating system audit logs.
-
Apply security controls at the Pod level (Preview)
Learn how to apply Pod-level security controls using the PodSecurity admission controller.
-
Migrate from PodSecurityPolicy to PodSecurity (Preview)
Learn how to migrate your existing PodSecurityPolicy configuration to the PodSecurity admission controller.
-
Applying Pod security policies using Gatekeeper
Learn the recommended way to apply Pod-level security controls to your clusters.
-
Using PodSecurityPolicies (Deprecated)
Learn how to use PodSecurityPolicies to restrict the capabilities of Pods in your clusters.
-
Isolate your workloads in dedicated node pools
Learn how to isolate your workloads from system Pods to reduce the risk of privilege escalation attacks.
-
Harden workload isolation with GKE Sandbox
Learn how to protect the host kernel on your cluster's nodes.
-
Enabling and configuring OS Login in GKE
Learn how to enable OS Login and configure an organization policy to enforce OS Login for private GKE clusters and nodes.
-
Protecting cluster metadata
Learn how to protect instance metadata on your cluster's nodes.
-
Using Shielded GKE Nodes
Learn how to run your cluster nodes on Shielded VMs.
-
Encrypt workload data in-use with Confidential Google Kubernetes Engine Nodes
Learn how to enable encryption-in-use for your workloads.
-
Mitigating security incidents
Learn actions you can take to mitigate potential ongoing security incidents.
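Role-based access control, one of the tasks listed above, can be configured directly with kubectl. A sketch granting a hypothetical user read-only access to Pods in a single namespace (the namespace and email are placeholders):

```shell
# Role that allows reading Pods in the "dev" namespace.
kubectl create role pod-reader \
    --verb=get,list,watch --resource=pods --namespace=dev

# Bind the role to a specific user.
kubectl create rolebinding pod-reader-binding \
    --role=pod-reader --user=alice@example.com --namespace=dev
```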
Using Config Connector for Kubernetes
-
Config Connector overview
Learn how to manage Google Cloud resources using Kubernetes tooling and APIs.
-
Installing, upgrading, and uninstalling Config Connector
Learn how to install Config Connector on your cluster.
-
Getting started with Config Connector
Learn how to enable the Cloud Storage API and create and manage Cloud Storage buckets.
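Once Config Connector is installed, Google Cloud resources are declared as Kubernetes objects. A sketch of the Cloud Storage bucket example mentioned above; the bucket name is a placeholder and must be globally unique:

```shell
kubectl apply -f - <<EOF
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: example-unique-bucket-name
spec:
  location: US
EOF
```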