
Exploring container security: Use your own keys to protect your data on GKE


At Google Cloud, we already encrypt data at rest by default, including data in Google Kubernetes Engine (GKE). However, we understand that you may need additional controls over encryption in GKE, especially for sensitive data that is used or accessed by applications running there.

Today, we’re releasing two features to help you protect and control your GKE environment and support regulatory requirements: the general availability of GKE application-layer Secrets encryption, so you can protect your Kubernetes Secrets with envelope encryption; and customer-managed encryption keys (CMEK) for GKE persistent disks in beta, giving you more control over encryption of persistent disks.

On Kubernetes Engine, your data is already encrypted at rest by default, including both the disks attached to GKE nodes and the Kubernetes Secret objects stored in the control plane database. These features give you control over the encryption keys used to encrypt your data.

Persistent disks in Kubernetes
Persistent disks in GKE are already encrypted at the hardware layer by default. If you need an additional layer of encryption where you manage the encryption keys yourself, read on.

The Container Storage Interface (CSI), generally available as of Kubernetes v1.13, provides a standard for exposing arbitrary storage systems to workloads running in Kubernetes. It allows third-party storage providers to develop plugins for new storage systems without having to touch the core Kubernetes code, giving you more storage options. This is what lets us use encrypted Compute Engine persistent disks in Kubernetes!

To encrypt persistent disks in GKE, you must use the GCP Persistent Disk CSI plugin, which lets you protect disks in GKE with a key that you manage in Cloud KMS: you create a StorageClass that references the key, and that key is then used to encrypt any disks created with that StorageClass. If your organization is required to manage its own key material, the CSI plugin provides the same functionality as traditional CMEK for persistent disks, in GKE.

First, create a Cloud KMS key to use for encryption. Then create a StorageClass that specifies the Cloud KMS key (KMS_KEY_ID below) to use to encrypt the disk:

  apiVersion: storage.k8s.io/v1beta1
  kind: StorageClass
  metadata:
    name: csi-gce-pd
  provisioner: pd.csi.storage.gke.io
  parameters:
    type: pd-standard
    disk-encryption-kms-key: KMS_KEY_ID
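If you don’t already have a key, one can be created with gcloud. A sketch, assuming an example keyring and key name and the us-central1 location (the key must be in a location your cluster can use):

```shell
# Create a keyring, then a symmetric encryption key inside it
gcloud kms keyrings create gke-keyring --location us-central1
gcloud kms keys create gke-disk-key \
  --location us-central1 \
  --keyring gke-keyring \
  --purpose encryption
```

The KMS_KEY_ID used in the StorageClass is the key’s full resource name, of the form projects/[PROJECT_ID]/locations/us-central1/keyRings/gke-keyring/cryptoKeys/gke-disk-key.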

Storage volumes provided via CSI can be automatically created or deleted using dynamic provisioning. When you create a PersistentVolumeClaim, this triggers the dynamic creation of a new PersistentVolume using the specified StorageClass. (The CSI external volume plugin provisions a new PersistentVolume, and Kubernetes binds it to the PersistentVolumeClaim and makes it ready for use.)

To create a PersistentVolumeClaim using the above StorageClass:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: podpvc
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: csi-gce-pd
    resources:
      requests:
        storage: 6Gi
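Once bound, the claim can be consumed like any other. A minimal Pod sketch (the Pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: encrypted-volume
  volumes:
    - name: encrypted-volume
      persistentVolumeClaim:
        claimName: podpvc
```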

Then, to create a CMEK-protected persistent disk, deploy the PersistentVolumeClaim on your GKE cluster:

  kubectl apply -f pvc.yaml
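To confirm the disk was actually created with your key, you can check that the claim is bound and inspect the underlying Compute Engine disk. A sketch (the disk name and zone will vary; the disk name appears in the bound PersistentVolume):

```shell
# Check that the PersistentVolumeClaim is bound to a PersistentVolume
kubectl get pvc podpvc

# Show which Cloud KMS key protects the underlying disk
gcloud compute disks describe [DISK_NAME] --zone [ZONE] \
  --format="value(diskEncryptionKey.kmsKeyName)"
```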

Note that you can only use CMEK to protect data on new persistent disks attached to nodes in your cluster—you cannot use it to protect existing persistent disks. Nor can you use CMEK to protect node boot disks or control plane disks. To use CMEK with regional clusters, you must use GKE version 1.14 or higher, and the GCE PD CSI driver version 0.5.1 or higher.

Secrets in Kubernetes
The GKE control plane stores API objects, including Kubernetes secrets, inside the etcd database, which sits on a disk encrypted with a Google-managed key.

To add more protection for Secrets, Kubernetes has supported application-layer envelope encryption of Secrets with a KMS provider since v1.10. A local key, known as a “data encryption key,” is used to encrypt the Secrets; that key is itself encrypted with another key, the “key encryption key,” which is stored in a key management service rather than in Kubernetes. This model allows you to regularly rotate the key encryption key without having to re-encrypt all the Secrets.
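The envelope model is easy to see with local tooling. A minimal sketch using OpenSSL, with a local file standing in for the Cloud KMS key encryption key (file names and the sample secret are illustrative):

```shell
# Generate a random 256-bit data encryption key (DEK)
openssl rand -hex 32 > dek.hex

# Encrypt the secret payload locally with the DEK
printf 'top secret' > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.hex -in secret.txt -out secret.enc

# Wrap the DEK with the key encryption key (KEK); in GKE the KEK lives in
# Cloud KMS, but here another local key stands in for it
openssl rand -hex 32 > kek.hex
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek.hex -in dek.hex -out dek.wrapped

# Rotating the KEK only requires re-wrapping dek.wrapped;
# secret.enc never needs to be re-encrypted
```

Note the structure, not the exact primitives, is the point: GKE wraps the DEK with Cloud KMS rather than a local file, and here `-pbkdf2` derives the AES key from the hex string rather than using the raw bytes directly.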

Application-layer Secrets encryption is now generally available in GKE, so you can protect Secrets with envelope encryption: your Secrets are encrypted with a local AES-256-CBC data encryption key, which is encrypted with a key encryption key that you manage in Cloud KMS.

You can use application-layer Secrets encryption with both new and existing clusters. To enable it, specify the --database-encryption-key flag, with your Cloud KMS key KMS_KEY_ID, when you create or update the cluster:

  gcloud container clusters update [CLUSTER_NAME] \
    --database-encryption-key [KMS_KEY_ID]
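The same flag also works at creation time. A sketch, assuming a key in the same location as the cluster and spelling out the key’s full resource name:

```shell
gcloud container clusters create [CLUSTER_NAME] \
  --zone [ZONE] \
  --database-encryption-key \
    projects/[PROJECT_ID]/locations/[LOCATION]/keyRings/[RING]/cryptoKeys/[KEY]
```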

Note that you can also use HSM-backed keys as the key encryption key! The only restriction is that you must use a Cloud KMS key in the same location as your cluster.

Support for application-layer Secrets encryption and customer-managed encryption keys provides new levels of assurance to organizations that want to use GKE to process sensitive data. To get started today, check out the documentation on using customer-managed encryption keys for GKE and application-layer Secrets encryption.

The GKE team’s David Zhu, Software Engineer, and Alex Tcherniakovski, Security Engineer, both contributed to this post.