This page describes how GKE Sandbox protects the host kernel on your nodes when containers in the Pod execute unknown or untrusted code. For example, multi-tenant clusters such as software-as-a-service (SaaS) providers often execute unknown code submitted by their users.
GKE Sandbox uses gVisor, an open source project. This topic broadly discusses gVisor, but you can learn more details by reading the official gVisor documentation.
For detailed information on how to enable GKE Sandbox, see Configure GKE Sandbox.
GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes. Before discussing how GKE Sandbox works, it's useful to understand the nature of the potential risks it helps mitigate.
A container runtime such as
containerd provides some degree of
isolation between the container's processes and the kernel running on the node.
However, the container runtime often runs as a privileged user on the node and
has access to most system calls into the host kernel.
Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other organizations that allow their users to upload and run code. A flaw in the container runtime or in the host kernel could allow a process running within a container to "escape" the container and affect the node's kernel, potentially bringing down the node.
The potential also exists for a malicious tenant to gain access to and exfiltrate another tenant's data in memory or on disk, by exploiting such a defect.
Finally, an untrusted workload could potentially access other Google Cloud services or cluster metadata.
How GKE Sandbox mitigates these threats
gVisor is a userspace re-implementation of the Linux kernel API that does not
need elevated privileges. In conjunction with a container runtime such as
containerd, the userspace kernel re-implements the majority of system calls
and services them on behalf of the host kernel. Direct access to the host kernel
is limited. See the
gVisor architecture guide
for detailed information about how this works. From the container's point of
view, gVisor is nearly transparent, and does not require any changes to the
application.
When you enable GKE Sandbox on a node pool, a sandbox is created for each Pod running on a node in that node pool. In addition, nodes running sandboxed Pods are prevented from accessing other Google Cloud services or cluster metadata.
Each sandbox uses its own userspace kernel. With this in mind, you can make decisions about how to group your containers into Pods, based on the level of isolation you require and the characteristics of your applications.
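As a concrete illustration, once GKE Sandbox is enabled on a node pool, a Pod opts into the sandbox by requesting the gvisor RuntimeClass. The Pod name and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod        # illustrative name
spec:
  runtimeClassName: gvisor   # schedules the Pod onto a GKE Sandbox node
  containers:
  - name: app
    image: nginx             # placeholder image
```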
GKE Sandbox is an especially good fit for the following types of applications. See Limitations for more information to help you decide which applications to sandbox.
- Untrusted or third-party applications using runtimes such as Rust, Java, Python, PHP, Node.js, or Golang
- Web server front-ends, caches, or proxies
- Applications processing external media or data using CPUs
- Machine-learning workloads using CPUs
- CPU-intensive or memory-intensive applications
Additional security recommendations
When using GKE Sandbox, we recommend that you also take the following precautions:
Specify resource limits on all containers running in a sandbox. This protects against the risk of a defective or malicious application starving the node of resources and negatively impacting other applications or system processes running on the node.
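For example, a sandboxed container might declare explicit requests and limits; the values below are placeholders to adapt to your workload:

```yaml
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"       # cap CPU so a runaway container can't starve the node
        memory: "256Mi"   # cap memory for the same reason
```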
If you are using Workload Identity, use a NetworkPolicy to block access to the cluster metadata server at 169.254.169.254. This protects against the risk of a malicious application accessing potentially private data such as the project ID, node name, and zone.
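One way to express this is an egress deny rule for the metadata IP. This sketch assumes your cluster has network policy enforcement enabled; the policy name is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access   # illustrative name
spec:
  podSelector: {}              # applies to all Pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # block the metadata server
```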
Limitations

GKE Sandbox works well with many applications, but not all. This section provides more information about the current limitations of GKE Sandbox.
Node pool configuration
- You cannot use GKE Sandbox on Windows Server node pools.
- You cannot enable GKE Sandbox on the default node pool.
- When using GKE Sandbox, your cluster must have at least two node pools. You must always have at least one node pool where GKE Sandbox is disabled. This node pool must contain at least one node, even if all your workloads are sandboxed.
- GKE versions earlier than 1.24.2-gke.300 don't support the e2-micro, e2-small, and e2-medium machine types; GKE version 1.24.2-gke.300 and later support them.
- Nodes must use the Container-Optimized OS with containerd (
cos_containerd) node image.
Access to cluster metadata
- Nodes running sandboxed Pods are prevented from accessing cluster metadata at the level of the operating system on the node.
- You can run regular Pods on a node with GKE Sandbox enabled. However, by default those regular Pods cannot access Google Cloud services or cluster metadata.
- Use Workload Identity to grant Pods access to Google Cloud services.
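As a sketch, granting a Pod access through Workload Identity involves annotating its Kubernetes service account with the Google service account to impersonate, and binding the two. All names below are placeholders:

```shell
# Annotate the Kubernetes service account (names are placeholders)
kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

# Allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding \
    GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"
```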
SMT may be disabled
Simultaneous multithreading (SMT) settings are used to mitigate side channel vulnerabilities that take advantage of threads sharing core state, such as Microarchitectural Data Sampling (MDS) vulnerabilities.
Starting in GKE version 1.24.2-gke.300, SMT is configured by machine type based on how vulnerable the machine is to MDS, as follows:
- Machine types with Intel processors: SMT is disabled by default.
- Machine types without Intel processors: SMT is enabled by default.
- Machine types with only one thread per core: no SMT support; all requested vCPUs are visible.
Prior to version 1.24.2-gke.300, SMT is disabled on all machine types.
You can enable SMT if it's disabled on your selected machine type. You're charged for every vCPU, regardless of whether SMT is on or off. For pricing information, refer to Compute Engine pricing.
GKE version 1.24.2-gke.300 and later

Set the --threads-per-core flag to 2 when creating a GKE Sandbox node pool:
```
gcloud container node-pools create smt-enabled \
    --cluster=CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --threads-per-core=2 \
    --sandbox type=gvisor
```
Replace the following:

- CLUSTER_NAME: the name of the existing cluster.
- MACHINE_TYPE: the machine type.
For more information about
--threads-per-core, refer to
Set the number of threads per core.
GKE versions before 1.24.2-gke.300
Create a new node pool in your cluster with the node label cloud.google.com/gke-smt-disabled=false:
```
gcloud container node-pools create smt-enabled \
    --cluster=CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --node-labels=cloud.google.com/gke-smt-disabled=false \
    --image-type=cos_containerd \
    --sandbox type=gvisor
```
Deploy the DaemonSet to the node pool. The DaemonSet will only run on nodes with the cloud.google.com/gke-smt-disabled=false label.
```
kubectl create -f \
    https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/disable-smt/gke/enable-smt.yaml
```
Ensure that the DaemonSet pods are in the Running state:

```
kubectl get pods --selector=name=enable-smt -n kube-system
```
The output is similar to the following:
```
NAME               READY   STATUS    RESTARTS   AGE
enable-smt-2xnnc   1/1     Running   0          6m
```
Check that SMT has been enabled appears in the logs of the pods:

```
kubectl logs enable-smt-2xnnc enable-smt -n kube-system
```
By default, the container is prevented from opening raw sockets, to reduce the
potential for malicious attacks. Certain network-related tools such as
tcpdump create raw sockets as part of their core functionality. To enable
raw sockets, you must explicitly add the
NET_RAW capability to the
container's security context:
```yaml
spec:
  containers:
  - name: my-container
    securityContext:
      capabilities:
        add: ["NET_RAW"]
```
Untrusted code running inside the sandbox may be allowed to reach external services such as database servers, APIs, other containers, and CSI drivers. These services are running outside the sandbox boundary and need to be individually protected. An attacker can try to exploit vulnerabilities in these services to break out of the sandbox. You must consider the risk and impact of these services being reachable by the code running inside the sandbox, and apply the necessary measures to secure them.
This includes file system implementations for container volumes such as ext4 and CSI drivers. CSI drivers run outside the sandbox isolation and may have privileged access to the host and services. An exploit in these drivers can affect the host kernel and compromise the entire node. We recommend that you run the CSI driver inside a container with the least amount of permissions required, to reduce the exposure in case of an exploit. GKE Sandbox supports using the Compute Engine Persistent Disk CSI driver.
You can't use GKE Sandbox with the following Kubernetes features:
- Accelerators such as GPUs or TPUs
- Monitoring statistics at the level of the Pod or container
- hostPath storage
- CPU and memory limits are only applied for Guaranteed Pods and Burstable Pods, and only when CPU and memory limits are specified for all containers running in the Pod.
- Pods using PodSecurityPolicies that specify host namespaces, such as hostNetwork
- Pods using PodSecurityPolicy settings such as privileged mode
- Istio: only supported on clusters running GKE version 1.18.6-gke.4801 and later.
- FSGroup: supported in GKE version 1.22 and later.
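The limits bullet above ties into the Pod's QoS class: a Pod whose containers all set requests equal to limits is classified as Guaranteed, so its CPU and memory limits are enforced in the sandbox. A minimal sketch, with placeholder names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-sandboxed-pod   # illustrative name
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # requests == limits => Guaranteed QoS
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
```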
Performance

Imposing an additional layer of indirection for accessing the node's kernel comes with performance trade-offs. GKE Sandbox provides the most tangible benefit on large multi-tenant clusters where isolation is important. Keep the following guidelines in mind when testing your workloads with GKE Sandbox.
Workloads that generate a large volume of low-overhead system calls, such as a large number of small I/O operations, may require more system resources when running in a sandbox, so you may need to use more powerful nodes or add additional nodes to your cluster.
Direct access to hardware or virtualization
If your workload needs any of the following, GKE Sandbox might not be a good fit because it prevents direct access to the host kernel on the node:
- Direct access to the node's hardware
- Kernel-level virtualization features
- Privileged containers