Security Bulletins

From time to time, we release security bulletins related to Google Kubernetes Engine (GKE). All GKE security bulletins are described on this page.

Vulnerabilities are often kept secret under embargoes until affected parties have had a chance to address them. In these cases, GKE's Release Notes will refer to "security updates" until the embargo has been lifted; at that point, the notes will be updated to reflect the vulnerability that the patch addressed.

Subscribe to the GKE security bulletins.

December 3, 2018

Description

The Kubernetes project recently disclosed a new security vulnerability, CVE-2018-1002105, which allows a user with relatively low privileges to bypass authorization to the kubelet's APIs and execute arbitrary operations for any Pod on any node in the cluster. For further details, see the Kubernetes disclosure. All Google Kubernetes Engine (GKE) masters were affected by this vulnerability, and we have already upgraded clusters to the latest patch versions. No action is required.

What should I do?

No action is required. GKE masters have already been upgraded.

This patch is available in GKE 1.9.7-gke.11, 1.10.6-gke.11, 1.10.7-gke.11, 1.10.9-gke.5, and 1.11.2-gke.18 and newer releases.
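To confirm which versions your clusters are running, you can list the master and node versions with gcloud. This is a minimal sketch; the field names are from the GKE cluster resource, and no cluster names are assumed:

# List master and node versions for all clusters in the project.
gcloud container clusters list \
    --format="table(name, currentMasterVersion, currentNodeVersion)"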

What vulnerability is addressed by this patch?

The patch mitigates the following vulnerability:

The vulnerability CVE-2018-1002105 allows a user with relatively low privileges to bypass authorization to the kubelet's APIs. A user authorized to make upgradeable requests (such as exec or attach calls) can use this to escalate and make arbitrary calls to the kubelet's API. This is rated as a Critical vulnerability in Kubernetes. Because details of GKE's implementation prevent the unauthenticated escalation path, it is rated as a High vulnerability for GKE.

Severity: High

Notes: CVE-2018-1002105

November 13, 2018

Description

2018-11-16 Update: The revocation and rotation of all potentially impacted tokens is complete. No further action is required.

Google recently discovered an issue in the Calico Container Network Interface (CNI) plugin which can, in certain configurations, log sensitive information. This issue is tracked under Tigera Technical Advisory TTA-2018-001.

  • When running with debug-level logging, the Calico CNI plugin will write Kubernetes API client configuration into the logs.
  • The Calico CNI plugin will also write the Kubernetes API token to the logs at the info level if the "k8s_auth_token" field is set on the CNI network configuration.
  • Additionally, when running with debug-level logging, if the service account token is explicitly set, either in the Calico configuration file or in environment variables used by Calico, then the Calico components (calico/node, felix, CNI) will write it to the log files.

These tokens have the following permissions:

bgpconfigurations.crd.projectcalico.org     [create get list update watch]
bgppeers.crd.projectcalico.org              [create get list update watch]
clusterinformations.crd.projectcalico.org   [create get list update watch]
felixconfigurations.crd.projectcalico.org   [create get list update watch]
globalbgpconfigs.crd.projectcalico.org      [create get list update watch]
globalfelixconfigs.crd.projectcalico.org    [create get list update watch]
globalnetworkpolicies.crd.projectcalico.org [create get list update watch]
globalnetworksets.crd.projectcalico.org     [create get list update watch]
hostendpoints.crd.projectcalico.org         [create get list update watch]
ippools.crd.projectcalico.org               [create get list update watch]
networkpolicies.crd.projectcalico.org       [create get list update watch]
nodes                                       [get list update watch]
pods                                        [get list watch patch]
namespaces                                  [get list watch]
networkpolicies.extensions                  [get list watch]
endpoints                                   [get]
services                                    [get]
pods/status                                 [update]
networkpolicies.networking.k8s.io           [watch list]

Google Kubernetes Engine clusters with both a cluster network policy and Stackdriver Logging enabled logged Calico service account tokens to Stackdriver. Clusters without network policy enabled are unaffected.
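To check whether network policy is enabled on a cluster, and therefore whether it might be affected, you can inspect the cluster's networkPolicy setting. This is a sketch; the cluster and zone names are placeholders:

# Prints the network policy configuration; empty output means it is not enabled.
gcloud container clusters describe my-cluster \
    --zone us-central1-a \
    --format="value(networkPolicy)"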

We have deployed a fix that configures the Calico CNI plugin to log only at the warning level and to use a new service account. The patched Calico code will be deployed in a later release.

Over the course of the next week, we will perform a rolling revocation of any potentially impacted tokens. This bulletin will be updated when the revocation is complete; no further action on your part is required. (This rotation was completed on 2018-11-16.)

If you wish to rotate these tokens immediately, you can run the following command; the new secret for the service account should be re-created automatically within a few seconds:

kubectl get sa --namespace kube-system calico -o template --template '{{(index .secrets 0).name}}' | xargs kubectl delete secret --namespace kube-system
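To verify that the rotation took effect, you can confirm that a fresh secret now backs the service account. This is a minimal check, not part of the official remediation:

# The secret name and creation timestamp should reflect the rotation.
kubectl get sa --namespace kube-system calico \
    -o template --template '{{(index .secrets 0).name}}' \
    | xargs kubectl get secret --namespace kube-system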

Detection

GKE logs all access to the API server. To determine whether a Calico token was used from outside Google Cloud's expected IP ranges, you can run the following Stackdriver query. Note that this will only return records for calls made from outside GCP's network; customize it as needed for your specific environment.

resource.type="k8s_cluster"
protoPayload.authenticationInfo.principalEmail="system:serviceaccount:kube-system:calico"
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "8.34.208.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "8.35.192.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "8.35.200.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.59.80.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.192.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.208.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.216.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.220.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.222.0/24")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.224.0.0/13")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "162.216.148.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "162.222.176.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "173.255.112.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "192.158.28.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "199.192.112.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "199.223.232.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "199.223.236.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "23.236.48.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "23.251.128.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.204.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.208.0.0/13")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "107.167.160.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "107.178.192.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.2.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.4.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.8.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.16.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.32.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.64.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.128.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.192.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.240.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.8.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.16.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.32.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.64.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.128.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "104.154.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "104.196.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "208.68.108.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.184.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.188.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.202.0.0/16")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.128.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.192.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.235.224.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.192.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.196.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.198.0.0/16")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.199.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.199.128.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.200.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "2600:1900::/35")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.224.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.232.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.234.0.0/16")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.235.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.235.192.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.236.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.240.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.232.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.4.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.220.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.242.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.244.0.0/14")
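As an alternative to the Stackdriver console, the same filter can be run from the command line with gcloud. This is a sketch assuming the Cloud SDK is installed; the filter file name and project ID are placeholders:

# Store the full filter from above in a file, then query the logs.
gcloud logging read "$(cat calico-filter.txt)" \
    --project my-project \
    --limit 100 \
    --format json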

August 14, 2018

Description

Intel has disclosed the following CVEs:

  • CVE-2018-3615
  • CVE-2018-3620
  • CVE-2018-3646

These CVEs are collectively referred to as "L1 Terminal Fault (L1TF)".

These L1TF vulnerabilities exploit speculative execution by attacking the configuration of processor-level data structures. "L1" refers to the Level-1 Data cache (L1D), a small on-core resource used to accelerate memory access.

Read the Google Cloud blog post for more details on these vulnerabilities and Compute Engine's mitigations.

Google Kubernetes Engine impact

The infrastructure that runs Kubernetes Engine and isolates customer Clusters and Nodes from each other is protected against known attacks.

Kubernetes Engine node pools that use Google's Container-Optimized OS (COS) image and that have auto-upgrade enabled will be automatically updated to patched versions of the COS image as they become available, starting the week of 2018-08-20.

Kubernetes Engine node pools that do not have auto-upgrade enabled must be manually upgraded as patched versions of the COS image become available, as in the sketch below.
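You can check whether auto-upgrade is enabled on a node pool, and upgrade it manually if not, with commands like the following. The cluster, node pool, and zone names are placeholders:

# Check whether auto-upgrade is enabled for the node pool.
gcloud container node-pools describe default-pool \
    --cluster my-cluster --zone us-central1-a \
    --format="value(management.autoUpgrade)"

# Manually upgrade the node pool to the cluster master's version.
gcloud container clusters upgrade my-cluster \
    --node-pool default-pool --zone us-central1-a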

Severity: High

Notes: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

August 6, 2018; last updated: September 5, 2018


2018-09-05 Update

CVE-2018-5391 was recently disclosed. As with CVE-2018-5390, this is a kernel-level networking vulnerability that increases the effectiveness of denial of service (DoS) attacks against vulnerable systems. The main difference is that CVE-2018-5391 is exploitable over IP connections. We updated this bulletin to cover both of these vulnerabilities.

Description

CVE-2018-5390 ("SegmentSmack") describes a kernel-level networking vulnerability that increases the effectiveness of denial of service (DoS) attacks against vulnerable systems over TCP connections.

CVE-2018-5391 ("FragmentSmack") describes a kernel-level networkingvulnerability that increases the effectiveness of denial of service (DoS) attacks against vulnerable systems over IP connections.

Google Kubernetes Engine impact

As of 2018-08-11, all Kubernetes Engine masters are protected against both vulnerabilities. Additionally, all Kubernetes Engine clusters that are configured to automatically upgrade are protected against both vulnerabilities. Kubernetes Engine node pools that are not configured to automatically upgrade, and that were last manually upgraded before 2018-08-11, are affected by both vulnerabilities.

Patched versions

Due to the severity of these vulnerabilities, we recommend that you manually upgrade your nodes as soon as the patch becomes available.
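To see which versions are currently available in your cluster's zone, and therefore whether a patched release can be applied yet, you can query the server config. This is a sketch; the zone is a placeholder:

# Lists valid master and node versions for the zone.
gcloud container get-server-config --zone us-central1-a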

Severity: High

Notes: CVE-2018-5390, CVE-2018-5391

May 30, 2018

Description

A vulnerability was recently discovered in Git that may allow escalation of privileges in Kubernetes if unprivileged users are allowed to create Pods with gitRepo volumes. The vulnerability is tracked as CVE-2018-11235.

Am I affected?

This vulnerability affects you if all of the following are true:

  • Untrusted users can create Pods (or trigger Pod creation).
  • Pods created by untrusted users have restrictions preventing host root access (for example, via PodSecurityPolicy).
  • Pods created by untrusted users are allowed to use the gitRepo volume type.

All Kubernetes Engine nodes are vulnerable.

What should I do?

Forbid the use of the gitRepo volume type. To forbid gitRepo volumes with PodSecurityPolicy, omit gitRepo from the volumes whitelist in your PodSecurityPolicy, as in the sketch below.
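A minimal PodSecurityPolicy sketch that omits gitRepo from the allowed volume types might look like the following. The policy name is hypothetical, and the exact volume list is illustrative; it should match your workloads' needs:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-gitrepo-example # Hypothetical name
spec:
  # gitRepo is deliberately absent from this whitelist.
  volumes:
    - configMap
    - emptyDir
    - secret
    - downwardAPI
    - persistentVolumeClaim
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny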

Equivalent gitRepo volume behavior can be achieved by cloning a git repository into an emptyDir volume from an initContainer:

apiVersion: v1
kind: Pod
metadata:
  name: git-repo-example
spec:
  initContainers:
    # This container clones the desired git repo to the EmptyDir volume.
    - name: git-clone
      image: alpine/git # Any image with git will do
      args:
        - clone
        - --single-branch
        - --
        - https://github.com/kubernetes/kubernetes # Your repo
        - /repo # Put it in the volume
      securityContext:
        runAsUser: 1 # Any non-root user will do. Match to the workload.
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: git-repo
          mountPath: /repo
  containers:
    ...
  volumes:
    - name: git-repo
      emptyDir: {}
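Once the Pod is running, you can confirm that the clone succeeded by reading the init container's logs; the Pod and container names match the example above:

kubectl logs git-repo-example -c git-clone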

What patch addresses this vulnerability?

A patch will be included in an upcoming Kubernetes Engine release. Please check back here for details.

Severity: Medium

Notes: CVE-2018-11235

May 21, 2018

Description

Several vulnerabilities were recently discovered in the Linux kernel that may allow escalation of privileges or denial of service (via kernel crash) from an unprivileged process. These vulnerabilities are tracked as CVE-2018-1000199, CVE-2018-8897, and CVE-2018-1087. All Kubernetes Engine nodes are affected by these vulnerabilities, and we recommend that you upgrade to the latest patch version, as detailed below.

What should I do?

In order to upgrade, you must first upgrade your master to the newest version. This patch is available in Kubernetes Engine 1.8.12-gke.1, Kubernetes Engine 1.9.7-gke.1, and Kubernetes Engine 1.10.2-gke.1. These releases include patches for both Container-Optimized OS and Ubuntu images.

If you create a new cluster before the patched version becomes the default, you must specify the patched version for it to be used. Customers who have node auto-upgrade enabled and who do not manually upgrade will have their nodes upgraded to patched versions in the coming weeks.
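As a sketch, a new cluster can be pinned to a patched version at creation time, with node auto-upgrade enabled so that future patches are applied automatically. The cluster name and zone are placeholders:

# Create a cluster on a patched version with auto-upgrading nodes.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --cluster-version 1.10.2-gke.1 \
    --enable-autoupgrade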

What vulnerabilities are addressed by this patch?

The patch mitigates the following vulnerabilities:

CVE-2018-1000199: This vulnerability affects the Linux kernel. It allows an unprivileged user or process to crash the system kernel, leading to a DoS attack or privilege escalation. This is rated as a High vulnerability, with a CVSS of 7.8.

CVE-2018-8897: This vulnerability affects the Linux kernel. It allows an unprivileged user or process to crash the system kernel, leading to a DoS attack. This is rated as a Medium vulnerability, with a CVSS of 6.5.

CVE-2018-1087: This vulnerability affects the Linux kernel's KVM hypervisor. It allows an unprivileged process to crash the guest kernel or potentially gain privileges. This vulnerability is patched in the infrastructure that Kubernetes Engine runs on, so Kubernetes Engine is unaffected. It is rated as a High vulnerability, with a CVSS score of 8.0.

Severity: High

Notes: CVE-2018-1000199, CVE-2018-8897, CVE-2018-1087

March 12, 2018

Description

The Kubernetes project recently disclosed new security vulnerabilities, CVE-2017-1002101 and CVE-2017-1002102, which allow containers to access files outside the container. All Kubernetes Engine nodes are affected by these vulnerabilities, and we recommend that you upgrade to the latest patch version as soon as possible, as detailed below.

What should I do?

Due to the severity of these vulnerabilities, whether you have node auto-upgrades enabled or not, we recommend that you manually upgrade your nodes as soon as the patch becomes available. The patch will be available for all customers by March 16th, but it may be available for you sooner based on the zone your cluster is in, according to the release schedule.

In order to upgrade, you must first upgrade your master to the newest version. This patch will be available in Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1. New clusters will use the patched version by default by March 30; if you create a new cluster before then, you must specify the patched version for it to be used.

Kubernetes Engine customers who have node auto-upgrades enabled and who do not manually upgrade will have their nodes upgraded to patched versions by April 23rd. However, due to the nature of the vulnerability, we recommend you manually upgrade your nodes as soon as the patch is available to you.
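A sketch of the manual upgrade sequence follows, upgrading the master first and then the nodes. The cluster name, zone, and exact patched version are placeholders:

# Upgrade the master first...
gcloud container clusters upgrade my-cluster \
    --zone us-central1-a --master --cluster-version 1.9.4-gke.1

# ...then upgrade the nodes to match.
gcloud container clusters upgrade my-cluster \
    --zone us-central1-a --cluster-version 1.9.4-gke.1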

What vulnerabilities are addressed by this patch?

The patch mitigates the following two vulnerabilities:

The vulnerability CVE-2017-1002101 allows containers using subPath volume mounts to access files outside of the volume. This means that if you are blocking container access to hostPath volumes with PodSecurityPolicy, an attacker with the ability to update or create pods can mount any host path using any other volume type.

The vulnerability CVE-2017-1002102 allows containers using certain volume types (including secret, configMap, projected, and downwardAPI volumes) to delete files outside of the volume. This means that if a container using one of these volume types is compromised, or if you allow untrusted users to create pods, an attacker could use that container to delete arbitrary files on the host.
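For context, a subPath volume mount (the feature involved in CVE-2017-1002101) looks like the following. This is an illustration of the feature only, not an exploit; the Pod name is hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-example # Hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
          subPath: app-subdir # Mounts only this subdirectory of the volume
  volumes:
    - name: data
      emptyDir: {}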

To learn more about the fix, read the Kubernetes blog post.

Severity: High

Notes: CVE-2017-1002101, CVE-2017-1002102