This page contains a historical archive of all security bulletins prior to 2020 for the following products:
To view the most recent security bulletins, see the Security bulletins page.
GKE security bulletins
November 14, 2019
Published: 2019-11-14 | Updated: 2019-11-14
Description | Severity | Notes |
---|---|---|
Kubernetes has disclosed a security issue in the kubernetes-csi sidecar containers.
What should I do?
What vulnerabilities are addressed by this patch? |
Medium |
November 12, 2019
Published: 2019-11-12 | Updated: 2019-11-12
Description | Severity | Notes |
---|---|---|
Intel has disclosed CVEs that potentially allow interactions between speculative execution and microarchitectural state to expose data. For further details, see the Intel disclosure.

The host infrastructure that runs Kubernetes Engine isolates customer workloads. Unless you are running untrusted code inside your own multi-tenant GKE clusters and using N2, M2, or C2 nodes, no further action is required. For GKE instances on N1 nodes, no new action is required. If you are running Google Distributed Cloud, exposure is hardware dependent. Please compare your infrastructure with the Intel disclosure.

What should I do?

You are only impacted if you are using node pools with N2, M2, or C2 nodes and those nodes run untrusted code inside your own multi-tenant GKE clusters.
Restarting your nodes applies the patch. The easiest way to restart all nodes in your node pool is to use the upgrade operation to force a restart across the affected node pool.

What vulnerabilities are addressed by this patch?

The patch mitigates the following vulnerabilities:

CVE-2019-11135: Also known as TSX Async Abort (TAA). TAA provides another avenue for data exfiltration using the same microarchitectural data structures that were exploited by Microarchitectural Data Sampling (MDS).

CVE-2018-12207: A Denial of Service (DoS) vulnerability affecting virtual machine hosts, allowing a malicious guest to crash an unprotected host. This CVE is also known as "Machine Check Error on Page Size Change." This does not affect GKE. |
Medium |
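The restart-via-upgrade approach described above can be driven from gcloud. A minimal sketch — the cluster, zone, node pool, and target version below are placeholders, not values from this bulletin:

```shell
# Hypothetical names; substitute your own cluster, zone, and node pool.
# Specifying a target version for the node pool triggers a rolling
# recreation of every node in the pool, which applies host patches.
gcloud container clusters upgrade my-cluster \
  --zone us-central1-a \
  --node-pool my-node-pool \
  --cluster-version 1.14.8-gke.12
```

Nodes are recreated one at a time, so workloads with multiple replicas remain available during the rollout.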
October 22, 2019
Published: 2019-10-22 | Updated: 2019-10-22
Description | Severity | Notes |
---|---|---|
A vulnerability was recently discovered in the Go programming language, described in CVE-2019-16276. This vulnerability potentially impacts Kubernetes configurations using an Authenticating Proxy. For further details, see the Kubernetes disclosure. Kubernetes Engine does not allow configuration of an Authenticating Proxy, and is therefore not affected by this vulnerability. |
None |
October 16, 2019
Published: 2019-10-16 | Updated: 2019-10-24
Description | Severity | Notes |
---|---|---|
2019-10-24 Update: Patched versions are now available in all zones.

A vulnerability was recently discovered in Kubernetes, described in CVE-2019-11253, which allows any user authorized to make POST requests to execute a remote Denial-of-Service attack on a Kubernetes API server. The Kubernetes Product Security Committee (PSC) released additional information on this vulnerability, which can be found here. GKE clusters that use Master Authorized Networks, and private clusters with no public endpoint, mitigate this vulnerability.

What should I do?

We recommend that you upgrade your cluster to a patch version containing the fix as soon as it is available. We expect patched versions to be available in all zones with the GKE release planned for the week of October 14th. The patch versions which will contain the mitigation are listed below:
What vulnerabilities are addressed by this patch?The patch mitigates the following vulnerabilities: CVE-2019-11253 is a Denial-of-Service (DoS) vulnerability. |
High |
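The Master Authorized Networks mitigation mentioned above can be enabled on an existing cluster. A minimal sketch, assuming a hypothetical cluster name and trusted CIDR range:

```shell
# Hypothetical cluster name and CIDR; restricts API-server access to
# a trusted network range so arbitrary hosts cannot reach the endpoint.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
```

Note that this limits who can reach the API server at all; users inside the allowed range still need to be blocked from the DoS by the patch itself.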
September 16, 2019
Published: 2019-09-16 | Updated: 2019-10-16
Description | Severity | Notes |
---|---|---|
This bulletin has been updated since its original publication.

The Go programming language recently disclosed new security vulnerabilities CVE-2019-9512 and CVE-2019-9514, which are Denial of Service (DoS) vulnerabilities. In GKE, these could allow a user to craft malicious requests that consume excessive amounts of CPU in the Kubernetes API server, potentially reducing the availability of the cluster control plane. For further details, see the Go programming language disclosure.

What should I do?

We recommend that you upgrade your cluster to the latest patch version, which contains the mitigation for these vulnerabilities, as soon as it is available. We expect patched versions to be available in all zones with the next GKE release, according to the release schedule. The patch versions which will contain the mitigation are listed below:
What vulnerability is addressed by this patch?The patch mitigates the following vulnerabilities: CVE-2019-9512 and CVE-2019-9514 are Denial of Service (DoS) vulnerabilities. |
High |
September 5, 2019
Published: 2019-09-05 | Updated: 2019-09-05
The bulletin of May 31, 2019 has been updated with information about the fix for the vulnerability it documents.
August 22, 2019
Published: 2019-08-22 | Updated: 2019-08-22
The bulletin for August 5, 2019 has been updated. The fix for the vulnerability documented in the earlier bulletin is available.
August 8, 2019
Published: 2019-08-08 | Updated: 2019-08-08
The bulletin for August 5, 2019 has been updated. We expect the fix for the vulnerability documented in that bulletin to be available in the next release of GKE.
August 5, 2019
Published: 2019-08-05 | Updated: 2019-08-09
Description | Severity | Notes |
---|---|---|
This bulletin has been updated since its original publication.

Kubernetes recently discovered a vulnerability, CVE-2019-11247, which allows cluster-scoped custom resource instances to be acted on as if they were namespaced objects existing in all namespaces. This means users and service accounts with only namespace-level RBAC permissions can interact with cluster-scoped custom resources. Exploiting this vulnerability requires the attacker to have privileges to access the resource in any namespace.

What should I do?

We recommend that you upgrade your cluster to the latest patch version, which contains the mitigation for this vulnerability, as soon as it is available. We expect patched versions to be available in all zones with the next GKE release. The patch versions which will contain the mitigation are listed below:
What vulnerability is addressed by this patch?The patch mitigates the following vulnerability: CVE-2019-11247. |
Medium |
July 3, 2019
Published: 2019-07-03 | Updated: 2019-07-03
Description | Severity | Notes |
---|---|---|
A patched version of
Note: The patch is not available in |
High |
July 3, 2019
Published: 2019-06-25 | Updated: 2019-07-03
Description | Severity | Notes |
---|---|---|
July 3, 2019 UpdateAt the time of our last update, patches for versions 1.11.9 and 1.11.10 were not yet available. We have now released 1.11.10-gke.5 as an upgrade target for both 1.11 versions. At this time, GKE masters have been patched, and the Google infrastructure that runs Kubernetes Engine has been patched and is protected from this vulnerability. 1.11 masters will soon be deprecated and are scheduled to automatically upgrade to 1.12 the week of July 8, 2019. You may choose any of the following suggested actions to get nodes onto a patched version:
The original bulletin from June 24, 2019 follows:

June 24, 2019 Update

As of 2019-06-22 21:40 UTC we have made the following patched Kubernetes versions available. Masters between Kubernetes versions 1.11.0 and 1.13.6 will be automatically updated to a patched version. If you are not running a version compatible with this patch, upgrade to a compatible master version (listed below) before upgrading your nodes. Due to the severity of these vulnerabilities, whether you have node auto-upgrade enabled or not, we recommend that you manually upgrade both your nodes and masters to one of these versions as soon as possible. The patched versions:
The original bulletin from June 18, 2019 follows:

Netflix has recently disclosed three TCP vulnerabilities in Linux kernels: CVE-2019-11477, CVE-2019-11478, and CVE-2019-11479. These CVEs are collectively referred to as NFLX-2019-001. Unpatched Linux kernels may be vulnerable to a remotely triggered denial-of-service attack. Google Kubernetes Engine nodes that send or receive untrusted network traffic are affected, and we recommend that you follow the mitigation steps below to protect your workloads.

Kubernetes masters
Kubernetes nodesNodes that limit traffic to trusted networks are unaffected. This would be a cluster with the following:
Google is preparing a permanent mitigation for these vulnerabilities that will be made available as a new node version. We will update this bulletin and send an email to all GKE customers when the permanent fix is made available. Until the permanent fix is available, we've created a Kubernetes
DaemonSet that implements mitigations by modifying the host
What should I do?
Apply the Kubernetes DaemonSet to all nodes in your cluster by running the following command. This deploys the `drop-small-mss` mitigation to each node:

```
kubectl apply -f \
  https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/drop-small-mss/drop-small-mss.yaml
```

Once a patched node version is available and you have upgraded all potentially-affected nodes, you can remove the DaemonSet using the following command. Run the command once per cluster per Google Cloud project.

```
kubectl delete -f \
  https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/drop-small-mss/drop-small-mss.yaml
```
|
High Medium Medium |
CVE-2019-11477 CVE-2019-11478 CVE-2019-11479 |
June 25, 2019
Description | Severity | Notes |
---|---|---|
2019-07-03 Update: This patch is available in
Note: The patch is not available in 1.11.10.
Kubernetes recently discovered a vulnerability, CVE-2019-11246, which allows an attacker with control of a container to use a malicious `kubectl cp` response to overwrite files on the workstation of the user running the command. All Google Kubernetes Engine (GKE) `kubectl` binaries are affected by this vulnerability.

What should I do?

A patched version of `kubectl` addresses this issue. Track the availability of this patch in the release notes.

What vulnerability is addressed by this patch?

The vulnerability CVE-2019-11246 allows an attacker with control of a container to overwrite files on the workstation of a user who runs `kubectl cp` against that container. |
High |
June 18, 2019
Description | Severity | Notes |
---|---|---|
Docker recently discovered a vulnerability, CVE-2018-15664, that allows an attacker that can execute code inside a container to hijack an externally-initiated `docker cp` operation.
All Google Kubernetes Engine (GKE) nodes running Docker are affected by this vulnerability, and we recommend that you upgrade to the latest patch version once available. An upcoming patch version will include a mitigation for this vulnerability.
All Google Kubernetes Engine (GKE) masters older than version 1.12.7
are running Docker and are affected by this vulnerability.
On GKE, users do not have direct access to the Docker daemon on nodes.

What should I do?

Only nodes running Docker are affected, and only when a `docker cp` operation is run against a running container.
In order to upgrade your nodes, you must first upgrade your master to the patched version. When the patch is available, you can either initiate a master upgrade or wait for Google to auto-upgrade the master. The patch will be available in Docker 18.09.7, included in an upcoming GKE patch. This patch will only be available for GKE versions 1.13 and above. We will upgrade cluster masters to the patched version automatically, at the regular upgrade cadence. You can also initiate a master upgrade yourself after the patched version becomes available. We will update this bulletin with the versions containing a patch once available. Track the availability of these patches in the release notes. What vulnerability is addressed by this patch?The patch mitigates the following vulnerability:
The vulnerability CVE-2018-15664 allows an attacker that can execute code inside a container to hijack an externally-initiated `docker cp` operation, gaining read and write access to the host filesystem. |
High |
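Initiating a master upgrade yourself, as described above, can be done from gcloud. A minimal sketch with a placeholder cluster name, zone, and version (pick a patched version from the release notes):

```shell
# Hypothetical names; --master targets the control plane rather
# than a node pool. The version must be one offered for this cluster.
gcloud container clusters upgrade my-cluster \
  --zone us-central1-a \
  --master \
  --cluster-version 1.13.7-gke.8
```

Once the master is on the patched version, node pools can be upgraded to match it.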
May 31, 2019
Description | Severity | Notes |
---|---|---|
This bulletin has been updated since its original publication.

August 2, 2019 Update

At the time of the initial bulletin, only 1.13.6-gke.0 through 1.13.6-gke.5 were impacted. Due to a regression, all 1.13.6 versions are now affected. If you are running 1.13.6, upgrade to 1.13.7 as soon as possible.
The Kubernetes project has disclosed CVE-2019-11245 in kubelet v1.13.6 and v1.14.2, which can cause containers to run as UID 0 (which typically maps to the root user) on restart, even if a non-root user is specified in the container image.

How do I know if I'm running an impacted version?

Run the following command to list all nodes and their kubelet version:

```
kubectl get nodes -o=jsonpath='{range .items[*]}'\
'{.status.nodeInfo.machineID}'\
'{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'
```

If the output lists any of the kubelet versions listed below, your nodes are impacted:
How do I know if my specific configuration is affected?If your containers run as a non-root user, and you are running node version 1.13.6-gke.0 through 1.13.6-gke.6 you are impacted except in the following cases:
What should I do?Set the RunAsUser Security Context on all Pods in the cluster that should not run as UID 0. You can apply this configuration using a PodSecurityPolicy. |
Medium | CVE-2019-11245 |
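Setting the `RunAsUser` security context, as the May 31 bulletin recommends, can be sketched as follows; the pod name, image, and UID here are hypothetical:

```shell
# Hypothetical pod; runAsUser pins the container's UID explicitly,
# so it cannot fall back to UID 0 on an affected kubelet.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-nonroot
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: app
    image: gcr.io/example/app:latest
EOF
```

The same constraint can be enforced cluster-wide with a PodSecurityPolicy whose `runAsUser` rule is `MustRunAsNonRoot`.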
May 14, 2019
Description | Severity | Notes |
---|---|---|
2019-06-11 Update: The patch is available in 1.11.10-gke.4, 1.12.8-gke.6, and 1.13.6-gke.5, released the week of 2019-05-28, and newer releases.

Intel has disclosed the following CVEs: CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, and CVE-2019-11091. These CVEs are collectively referred to as Microarchitectural Data Sampling (MDS). These vulnerabilities potentially allow data to be exposed via the interaction of speculative execution with microarchitectural state. For further details, see the Intel disclosure.

The host infrastructure that runs Kubernetes Engine isolates customer workloads from each other. Unless you are running untrusted code inside your own multi-tenant GKE clusters, you are not impacted. For customers running untrusted code in their own multi-tenant services within Kubernetes Engine, this is a particularly severe vulnerability. To mitigate it in Kubernetes Engine, disable Hyper-Threading in your nodes.

Only Google Kubernetes Engine (GKE) nodes using multiple CPUs are affected by these vulnerabilities. Note that n1-standard-1 (the GKE default), g1-small, and f1-micro VMs only expose 1 vCPU to the guest environment, so there is no need to disable Hyper-Threading.

Additional protections to enable flush functionality will be included in an upcoming patch version. We will upgrade masters and nodes with auto-upgrade to the patched version automatically in the coming weeks, at the regular upgrade cadence. The patch alone is not sufficient to mitigate exposure to this vulnerability. See below for recommended actions.

If you are running GKE on-prem, you may be affected depending on the hardware you are using. Please refer to the Intel disclosure.

What should I do?

Unless you are running untrusted code inside your own multi-tenant GKE clusters, you are not impacted. For nodes in Kubernetes Engine, create new node pools with Hyper-Threading disabled and reschedule your workloads onto the new nodes.
Note that n1-standard-1, g1-small, and f1-micro VMs only expose 1 vCPU to the guest environment, so there is no need to disable Hyper-Threading.
To create a new node pool with Hyper-Threading disabled:
You must keep the DaemonSet running on the node pools so that new nodes created in the pool will have the changes applied automatically. Node creation can be triggered by node auto-repair, manual or auto-upgrade, and autoscaling.

To re-enable Hyper-Threading, you will need to recreate the node pool without deploying the provided DaemonSet, and migrate your workloads to the new node pool.

We also recommend that you manually upgrade your nodes once the patch becomes available. In order to upgrade, you must first upgrade your master to the newest version. GKE masters will automatically be upgraded at the regular upgrade cadence. We will update this bulletin with the versions containing a patch once available.

What vulnerability is addressed by this patch?

The patch mitigates the following vulnerabilities: CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, and CVE-2019-11091. These vulnerabilities exploit speculative execution and are collectively referred to as Microarchitectural Data Sampling (MDS). They potentially allow data to be exposed via the interaction of speculative execution with microarchitectural state. |
Medium |
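The node-pool creation steps referenced above are elided in this archive. As one sketch — assuming a current gcloud that supports the `--threads-per-core` flag, and using hypothetical cluster and pool names — a pool without Hyper-Threading can be created and workloads drained onto it:

```shell
# Hypothetical names. --threads-per-core=1 disables simultaneous
# multithreading (Hyper-Threading) on the pool's nodes.
gcloud container node-pools create no-smt-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --threads-per-core 1

# Cordon and drain the old pool so workloads reschedule onto the new one.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```

At the time of the bulletin the same effect was achieved with a boot-option DaemonSet; the flag-based approach above is a later alternative, not the method the bulletin shipped.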
April 5, 2019
Description | Severity | Notes |
---|---|---|
Recently, the security vulnerabilities CVE-2019-9900 and CVE-2019-9901 were discovered in Envoy. Istio embeds Envoy, and these vulnerabilities allow Istio policy to be bypassed in some cases. If you have enabled Istio on Google Kubernetes Engine (GKE), you may be affected by these vulnerabilities. We recommend upgrading your affected clusters to the latest patch version as soon as possible, and upgrading your Istio sidecars (instructions below).

What should I do?
Due to the severity of these vulnerabilities, whether you
have node auto-upgrades enabled or not, we recommend that you:
The patched versions will be made available for all GKE projects before 7:00 PM PDT today. This patch will be available in the below GKE versions. New clusters will use the patched version by default when announced on the GKE security bulletins page, expected on April 15th, 2019; if you create a new cluster before then, you must specify the patched version for it to use. GKE customers who have node auto-upgrades enabled, and who do not manually upgrade, will have their nodes auto-upgraded to patched versions in the following week. Patched Versions:
What vulnerability is addressed by this patch?The patch mitigates the following vulnerabilities: CVE-2019-9900 and CVE-2019-9901. You can read more about them on the Istio blog. |
High |
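The sidecar-upgrade instructions referenced in the bulletin above are elided in this archive. As a sketch: with automatic sidecar injection, pods pick up the patched Envoy proxy when they are recreated after the control plane is upgraded. Assuming a kubectl new enough to support `rollout restart`, and a hypothetical deployment and namespace:

```shell
# Hypothetical deployment/namespace; recreating the pods causes the
# injection webhook to inject the patched sidecar image.
kubectl rollout restart deployment my-app -n my-namespace
kubectl rollout status deployment my-app -n my-namespace
```

Deleting the pods individually achieves the same result on older kubectl versions.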
March 1, 2019
Description | Severity | Notes |
---|---|---|
2019-03-22 Update: This patch is available in Kubernetes 1.11.8-gke.4, 1.13.4-gke.1, and newer releases. The patch is not yet available in 1.12. Track the availability of these patches in the release notes.

Kubernetes recently disclosed a new denial-of-service vulnerability, CVE-2019-1002100, which allows a user authorized to make patch requests to craft a malicious "json-patch" request that consumes excessive amounts of CPU and memory in the Kubernetes API server, potentially reducing the availability of the cluster control plane. For further details, see the Kubernetes disclosure.

All Google Kubernetes Engine (GKE) masters are affected by this vulnerability. An upcoming patch version will include a mitigation. We will upgrade cluster masters to the patched version automatically in the coming weeks, at the regular upgrade cadence.

What should I do?

No action is required. GKE masters will automatically be upgraded at the regular upgrade cadence. If you wish to upgrade your master sooner, you can manually initiate a master upgrade. We will update this bulletin with the versions containing a patch. Note that the patch will only be available in versions 1.11+, not in 1.10.

What vulnerability is addressed by this patch?

The patch mitigates the following vulnerability: CVE-2019-1002100 allows a user to craft a patch of type "json-patch" that consumes excessive amounts of CPU in the Kubernetes API server, potentially reducing the availability of the cluster control plane. |
Medium | CVE-2019-1002100 |
February 11, 2019 (runc)
Description | Severity | Notes |
---|---|---|
The Open Containers Initiative (OCI) recently disclosed a new security vulnerability, CVE-2019-5736, in runc, allowing container escape to obtain root privileges on the host node. Your Google Kubernetes Engine (GKE) Ubuntu nodes are affected by this vulnerability, and we recommend that you upgrade to the latest patch version as soon as possible, as detailed below.

What should I do?

In order to upgrade your nodes, you must first upgrade your master to the newest version. This patch is available in Kubernetes 1.10.12-gke.7, 1.11.6-gke.11, 1.11.7-gke.4, 1.12.5-gke.5, and newer releases. Track the availability of these patches in the release notes.

Note that only Ubuntu nodes in GKE are affected. Nodes running COS are not affected. Note that the new version of runc has increased memory usage and may require updating memory allocated to containers if you have set low memory limits (< 16 MB).

What vulnerability is addressed by this patch?

The patch mitigates the following vulnerability: CVE-2019-5736 describes a vulnerability in runc that allows a malicious container (with minimal user interaction in the form of an exec) to overwrite the host runc binary and thus gain root-level code execution on the host node. Containers not running as root are unaffected. This is rated as a 'High' severity vulnerability. |
High | CVE-2019-5736 |
February 11, 2019 (Go)
Description | Severity | Notes |
---|---|---|
2019-02-25 Update: The patch is not available in 1.11.7-gke.4 as previously communicated. If you are running 1.11.7, you can downgrade to 1.11.6, upgrade to 1.12, or wait for the next 1.11.7 patch, available the week of 2019-03-04.

The Go programming language recently disclosed a new security vulnerability, CVE-2019-6486, which is a Denial of Service (DoS) vulnerability in the crypto/elliptic implementations of the P-521 and P-384 elliptic curves. In Google Kubernetes Engine (GKE), this could allow a user to craft malicious requests that consume excessive amounts of CPU in the Kubernetes API server, potentially reducing the availability of the cluster control plane. For further details, see the Go programming language disclosure.

All Google Kubernetes Engine (GKE) masters are affected by this vulnerability. The latest patch version includes a mitigation. We will upgrade cluster masters to the patched version automatically in the coming weeks, at the regular upgrade cadence.

What should I do?

No action is required. GKE masters will automatically be upgraded at the regular upgrade cadence. If you wish to upgrade your master sooner, you can manually initiate a master upgrade. This patch is available in GKE 1.10.12-gke.7, 1.11.6-gke.11, 1.12.5-gke.5, and newer releases.

What vulnerability is addressed by this patch?

The patch mitigates the following vulnerability: CVE-2019-6486 is a vulnerability in the crypto/elliptic implementations of the P-521 and P-384 elliptic curves, allowing a user to craft inputs that consume excessive amounts of CPU. |
High | CVE-2019-6486 |
December 3, 2018
Description | Severity | Notes |
---|---|---|
Kubernetes recently disclosed a new security vulnerability, CVE-2018-1002105, allowing a user with relatively low privileges to bypass authorization to the kubelet's APIs, giving the ability to execute arbitrary operations for any Pod on any node in the cluster. For further details, see the Kubernetes disclosure.

All Google Kubernetes Engine (GKE) masters were affected by this vulnerability, and we have already upgraded clusters to the latest patch versions. No action is required.

What should I do?

No action is required. GKE masters have already been upgraded. This patch is available in GKE 1.9.7-gke.11, 1.10.6-gke.11, 1.10.7-gke.11, 1.10.9-gke.5, and 1.11.2-gke.18 and newer releases.

What vulnerability is addressed by this patch?

The patch mitigates the following vulnerability: CVE-2018-1002105 allows a user with relatively low privileges to bypass authorization to the kubelet's APIs. This gives a user authorized to make upgrade requests the ability to escalate and make arbitrary calls to the kubelet's API. This is rated as a Critical vulnerability in Kubernetes. Given details in GKE's implementation that prevented the unauthenticated escalation path, it is rated as a High vulnerability for GKE. |
High | CVE-2018-1002105 |
November 13, 2018
Description |
---|
2018-11-16 Update: The revocation and rotation of all potentially impacted tokens is complete. No further action is required.

Google recently discovered an issue in the Calico Container Network Interface (CNI) plugin which can, in certain configurations, log sensitive information. This issue is tracked under Tigera Technical Advisory TTA-2018-001.
These tokens have the following permissions: |
bgpconfigurations.crd.projectcalico.org [create get list update watch]
bgppeers.crd.projectcalico.org [create get list update watch]
clusterinformations.crd.projectcalico.org [create get list update watch]
felixconfigurations.crd.projectcalico.org [create get list update watch]
globalbgpconfigs.crd.projectcalico.org [create get list update watch]
globalfelixconfigs.crd.projectcalico.org [create get list update watch]
globalnetworkpolicies.crd.projectcalico.org [create get list update watch]
globalnetworksets.crd.projectcalico.org [create get list update watch]
hostendpoints.crd.projectcalico.org [create get list update watch]
ippools.crd.projectcalico.org [create get list update watch]
networkpolicies.crd.projectcalico.org [create get list update watch]
nodes [get list update watch]
pods [get list watch patch]
namespaces [get list watch]
networkpolicies.extensions [get list watch]
endpoints [get]
services [get]
pods/status [update]
networkpolicies.networking.k8s.io [watch list] |
```
kubectl get sa --namespace kube-system calico \
  -o template --template '{{(index .secrets 0).name}}' \
  | xargs kubectl delete secret --namespace kube-system
```
|
DetectionGKE logs all access to the API server. To determine if a Calico token was used from outside Google Cloud's expected IP range, you can run the following Stackdriver query. Please note this will only return records for calls made from outside GCP's network. You should customize it as needed for your specific environment. |
```
resource.type="k8s_cluster"
protoPayload.authenticationInfo.principalEmail="system:serviceaccount:kube-system:calico"
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "8.34.208.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "8.35.192.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "8.35.200.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.59.80.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.192.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.208.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.216.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.220.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "108.170.222.0/24")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.224.0.0/13")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "162.216.148.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "162.222.176.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "173.255.112.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "192.158.28.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "199.192.112.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "199.223.232.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "199.223.236.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "23.236.48.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "23.251.128.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.204.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.208.0.0/13")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "107.167.160.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "107.178.192.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.2.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.4.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.8.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.16.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.32.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "146.148.64.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.128.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.192.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.240.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.8.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.16.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.32.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.64.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.128.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "104.154.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "104.196.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "208.68.108.0/23")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.184.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.188.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.202.0.0/16")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.128.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.192.0/19")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.235.224.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.192.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.196.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.198.0.0/16")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.199.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.199.128.0/18")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.200.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "2600:1900::/35")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.190.224.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.232.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.234.0.0/16")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.235.0.0/17")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.235.192.0/20")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.236.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.240.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.203.232.0/21")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "130.211.4.0/22")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.220.0.0/14")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.242.0.0/15")
NOT ip_in_net(protoPayload.requestMetadata.callerIp, "35.244.0.0/14")
```
|
August 14, 2018
Description | Severity | Notes |
---|---|---|
Intel has disclosed the following CVEs: CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646.

These CVEs are collectively referred to as "L1 Terminal Fault (L1TF)". These L1TF vulnerabilities exploit speculative execution by attacking the configuration of processor-level data structures. "L1" refers to the Level-1 Data cache (L1D), a small on-core resource used to accelerate memory access. Read the Google Cloud blog post for more details on these vulnerabilities and Compute Engine's mitigations.

Google Kubernetes Engine impact

The infrastructure that runs Kubernetes Engine and isolates customer clusters and nodes from each other is protected against known attacks. Kubernetes Engine node pools that use Google's Container-Optimized OS image and have auto-upgrade enabled will be automatically updated to patched versions of the COS image as they become available, starting the week of 2018-08-20. Kubernetes Engine node pools that do not have auto-upgrade enabled must be manually upgraded as patched versions of the COS image become available. |
High |
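Enabling node auto-upgrade, so that pools pick up patched COS images automatically as described above, can be sketched with gcloud; the cluster and pool names below are placeholders:

```shell
# Hypothetical names; enables automatic upgrades (and repair) on an
# existing node pool so patched node images roll out without manual action.
gcloud container node-pools update default-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoupgrade \
  --enable-autorepair
```

Pools without auto-upgrade must instead be upgraded manually once a patched version appears in the release notes.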
August 6, 2018; last updated: September 5th, 2018
Description | Severity | Notes |
---|---|---|
2018-09-05 Update

CVE-2018-5391 was recently disclosed. As with CVE-2018-5390, this is a kernel-level networking vulnerability that increases the effectiveness of denial-of-service (DoS) attacks against vulnerable systems. The main difference is that CVE-2018-5391 is exploitable over IP connections. We updated this bulletin to cover both of these vulnerabilities.

Description

CVE-2018-5390 ("SegmentSmack") describes a kernel-level networking vulnerability that increases the effectiveness of denial-of-service (DoS) attacks against vulnerable systems over TCP connections. CVE-2018-5391 ("FragmentSmack") describes a kernel-level networking vulnerability that increases the effectiveness of denial-of-service (DoS) attacks against vulnerable systems over IP connections.

Google Kubernetes Engine impact

As of 2018-08-11, all Kubernetes Engine masters are protected against both vulnerabilities. All Kubernetes Engine clusters that are configured to automatically upgrade are also protected against both vulnerabilities. Kubernetes Engine node pools that are not configured to automatically upgrade, and were last manually upgraded before 2018-08-11, are affected by both vulnerabilities.

Patched versions

Due to the severity of these vulnerabilities, we recommend that you manually upgrade your nodes as soon as the patch becomes available. |
High |
May 30, 2018
Description | Severity | Notes |
---|---|---|
A vulnerability was recently discovered in Git that may allow escalation of privileges in Kubernetes if unprivileged users are allowed to create Pods with gitRepo volumes. This vulnerability is identified as CVE-2018-11235. Am I affected? This vulnerability affects you if all of the following are true:
All Kubernetes Engine nodes are vulnerable. What should I do?
Forbid the use of the gitRepo volume type. To forbid gitRepo volumes
with PodSecurityPolicy, omit gitRepo from the allowed volume types in your PodSecurityPolicy. Equivalent gitRepo volume behavior can be achieved by cloning a git repository into an emptyDir volume from an initContainer:
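The following sketch illustrates both mitigations: a PodSecurityPolicy whose allowed volume list omits gitRepo, and a Pod that clones a repository into a shared emptyDir volume from an initContainer instead. The object names, image, and repository URL are placeholders, not part of the bulletin:

```yaml
# Hypothetical PodSecurityPolicy: gitRepo is deliberately absent from
# the allowed volume types, so Pods that request gitRepo volumes are
# rejected by the policy.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-gitrepo
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:            # note: no gitRepo entry
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
---
# Equivalent gitRepo behavior: an initContainer clones the repository
# into an emptyDir volume that the main container then mounts.
apiVersion: v1
kind: Pod
metadata:
  name: git-clone-example
spec:
  initContainers:
    - name: git-clone
      image: alpine/git          # any image with git installed
      args:
        - clone
        - --single-branch
        - --
        - https://github.com/example/repo   # placeholder repository
        - /repo
      volumeMounts:
        - name: repo
          mountPath: /repo
  containers:
    - name: app
      image: busybox             # placeholder main container
      command: ["sh", "-c", "ls /repo && sleep 3600"]
      volumeMounts:
        - name: repo
          mountPath: /repo
  volumes:
    - name: repo
      emptyDir: {}
```

Because the clone runs in an initContainer under an ordinary container image, it avoids the privileged code path in the gitRepo volume plugin that CVE-2018-11235 exploits.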
What patch addresses this vulnerability? A patch will be included in an upcoming Kubernetes Engine release. Please check back here for details. |
Medium |
May 21, 2018
Description | Severity | Notes |
---|---|---|
Several vulnerabilities were recently discovered in the Linux kernel that may allow escalation of privileges or denial of service (via kernel crash) from an unprivileged process. These vulnerabilities are identified as CVE-2018-1000199, CVE-2018-8897, and CVE-2018-1087. All Kubernetes Engine nodes are affected by these vulnerabilities, and we recommend that you upgrade to the latest patch version, as detailed below. What should I do? To upgrade, you must first upgrade your master to the newest version. This patch is available in Kubernetes Engine 1.8.12-gke.1, Kubernetes Engine 1.9.7-gke.1, and Kubernetes Engine 1.10.2-gke.1. These releases include patches for both Container-Optimized OS and Ubuntu images. If you create a new cluster before the patched version becomes the default, you must specify the patched version for it to be used. Customers who have node auto-upgrades enabled and who do not manually upgrade will have their nodes upgraded to patched versions in the coming weeks. What vulnerabilities are addressed by this patch? The patch mitigates the following vulnerabilities: CVE-2018-1000199: This vulnerability affects the Linux kernel. It allows an unprivileged user or process to crash the system kernel, leading to a DoS attack or privilege escalation. This is rated as a High vulnerability, with a CVSS score of 7.8. CVE-2018-8897: This vulnerability affects the Linux kernel. It allows an unprivileged user or process to crash the system kernel, leading to a DoS attack. This is rated as a Medium vulnerability, with a CVSS score of 6.5. CVE-2018-1087: This vulnerability affects the Linux kernel's KVM hypervisor. It allows an unprivileged process to crash the guest kernel or potentially gain privileges. This vulnerability is patched in the infrastructure that Kubernetes Engine runs on, so Kubernetes Engine is unaffected. This is rated as a High vulnerability, with a CVSS score of 8.0. |
High |
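The upgrade flow described above (master first, then node pools) can be driven with gcloud. This is a sketch, assuming a hypothetical zonal cluster named example-cluster in us-central1-a; substitute your own cluster, zone, node pool, and the patched version that applies to you:

```shell
# Upgrade the control plane (master) to the patched release first.
gcloud container clusters upgrade example-cluster \
    --zone us-central1-a --master --cluster-version 1.10.2-gke.1

# Then upgrade each node pool to the same patched version.
gcloud container clusters upgrade example-cluster \
    --zone us-central1-a --node-pool default-pool \
    --cluster-version 1.10.2-gke.1
```

Node pools with auto-upgrade enabled will eventually reach the patched version on their own; the manual commands simply shorten the exposure window.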
March 12, 2018
Description | Severity | Notes |
---|---|---|
The Kubernetes project recently disclosed new security vulnerabilities, CVE-2017-1002101 and CVE-2017-1002102, allowing containers to access files outside the container. All Kubernetes Engine nodes are affected by these vulnerabilities, and we recommend that you upgrade to the latest patch version as soon as possible, as we detail below. What should I do? Due to the severity of these vulnerabilities, whether you have node auto-upgrades enabled or not, we recommend that you manually upgrade your nodes as soon as the patch becomes available. The patch will be available for all customers by March 16th, but it may be available for you sooner based on the zone your cluster is in, according to the release schedule. In order to upgrade, you must first upgrade your master to the newest version. This patch will be available in Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1. New clusters will use the patched version by default by March 30; if you create a new cluster before then, you must specify the patched version for it to be used. Kubernetes Engine customers who have node auto-upgrades enabled and who do not manually upgrade will have their nodes upgraded to patched versions by April 23rd. However, due to the nature of the vulnerability, we recommend you manually upgrade your nodes as soon as the patch is available to you. What vulnerabilities are addressed by this patch? The patch mitigates the following two vulnerabilities: CVE-2017-1002101 allows containers using subpath volume mounts to access files outside of the volume. This means that if you are blocking container access to hostPath volumes with PodSecurityPolicy, an attacker with the ability to update or create pods can mount any hostPath using any other volume type. CVE-2017-1002102 allows containers using certain volume types - including secrets, config maps, projected volumes, or downward API volumes - to delete files outside of the volume. This means that if a container using one of these volume types is compromised, or if you allow untrusted users to create pods, an attacker could use that container to delete arbitrary files on the host. To learn more about the fix, read the Kubernetes blog post. |
High |
Google Distributed Cloud security bulletins
October 16, 2019
Description | Severity | Notes |
---|---|---|
A vulnerability was recently discovered in Kubernetes, described in CVE-2019-11253, which allows any user authorized to make POST requests to execute a remote Denial-of-Service attack on a Kubernetes API server. The Kubernetes Product Security Committee (PSC) has released additional information on this vulnerability. You can mitigate this vulnerability by limiting which clients have network access to your Kubernetes API servers. What should I do? We recommend that you upgrade your clusters to a patch version containing the fix as soon as one is available. The patch versions that will contain the fix are listed below:
What vulnerability is addressed by this patch? The patch mitigates the following vulnerability: CVE-2019-11253. |
High |
August 23, 2019
Description | Severity | Notes |
---|---|---|
We recently discovered and mitigated a vulnerability in which the RBAC proxy used for securing monitoring endpoints did not correctly authorize users. As a result, metrics from certain components were available to unauthorized users from within the internal cluster network. The following components were affected:
What should I do? We recommend that you upgrade your clusters to version 1.0.2-gke.3, which includes the patch for this vulnerability, as soon as possible. |
Medium |
August 22, 2019
Description | Severity | Notes |
---|---|---|
Kubernetes recently discovered a vulnerability, CVE-2019-11247, which allows cluster-scoped custom resource instances to be acted on as if they were namespaced objects existing in all Namespaces. This means user and service accounts with only namespace-level RBAC permissions can interact with cluster-scoped custom resources. Exploiting this vulnerability requires the attacker to have privileges to access the resource in any namespace. What should I do? We recommend that you upgrade your clusters to version 1.0.2-gke.3, which includes the patch for this vulnerability, as soon as possible. What vulnerability is addressed by this patch? The patch mitigates the following vulnerability: CVE-2019-11247. |
Medium |
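To illustrate the exposure, consider a hypothetical namespace-scoped Role written for a cluster-scoped custom resource. The resource name, API group, and namespace below are placeholders; on an unpatched cluster, subjects bound to this Role in the single dev namespace could act on the cluster-scoped instances through namespaced API paths:

```yaml
# Hypothetical example for CVE-2019-11247: a Role that is intended to
# grant permissions only within the "dev" namespace, but references a
# cluster-scoped custom resource ("widgets" in group "example.com").
# Before the patch, its subjects could read and modify the cluster-wide
# widget instances despite holding only namespace-level RBAC.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: widget-editor
rules:
  - apiGroups: ["example.com"]
    resources: ["widgets"]
    verbs: ["get", "list", "update", "delete"]
```

The patch closes the namespaced access path, so after upgrading, a Role like this grants nothing for cluster-scoped resources; only a ClusterRole with a ClusterRoleBinding does.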