Encrypt your data in-transit in GKE with user-managed encryption keys


This page shows you how to enable encryption of in-transit data for Pod communications across Google Kubernetes Engine (GKE) nodes with user-managed encryption keys.

By default, Google encrypts all data in-transit between VMs at the network interface controller (NIC) level to ensure the confidentiality of the data in-transit, regardless of what service or application is running on the VM (including GKE). This layer of encryption is applicable to all GKE nodes and Pod traffic. The encryption keys are provided and managed by Google.

With inter-node transparent encryption for GKE, Google gives you more control over the encryption keys used to encrypt Pod traffic across GKE nodes. GKE performs this encryption using WireGuard in GKE Dataplane V2, in addition to the default encryption provided by VM NICs.

Providing this control over the encryption keys directly in GKE is useful if you're in a regulated industry and you have a business need for compliance and security audits.

You can enable inter-node transparent encryption in single and multi-cluster environments. For more information about how this feature works, see How inter-node transparent encryption works with GKE.

Limitations

  • This feature on its own doesn't guarantee that Google can't access the encryption keys stored in the GKE node memory. In some regulated environments or jurisdictions, or to meet specific compliance requirements, you might want to further encrypt these keys and control access. To do this, we recommend that you use inter-node transparent encryption with Confidential GKE Nodes that use customer-managed encryption keys (CMEK). Confidential GKE Nodes that use CMEK encrypt the memory contents of the nodes with keys that you manage.

  • Inter-node transparent encryption for GKE is only supported on GKE Dataplane V2 clusters.

  • GKE Autopilot is not supported.

  • Inter-node transparent encryption for GKE uses WireGuard. WireGuard is not FIPS compliant.

  • Encryption keys are not dynamically rotated. Key rotation needs to be handled manually by restarting the nodes.

  • Inter-node transparent encryption along with Confidential GKE Nodes works only on Container-Optimized OS (COS) and Ubuntu, and not on Windows.

  • Inter-node transparent encryption does not encrypt network traffic initiated by the GKE node or a Pod using the hostNetwork.

  • Inter-node transparent encryption does not encrypt network traffic sent to a Pod exposed on a node port. Even when externalTrafficPolicy: Cluster is configured on the Service, the traffic forwarded from the first node that receives traffic from the client to the backend Pod is not encrypted.

  • The maximum number of nodes supported with inter-node transparent encryption enabled for single cluster or multi-cluster configurations is 500.

  • Inter-node transparent encryption might result in the nodes being oversubscribed. You can expect an average CPU increase of approximately 15% on n2-standard-8 nodes running the Ubuntu OS at 2 Gbps of throughput.

    The increase in CPU utilization is not attributed to any Pod because the kube-scheduler is not aware of it. The Pod with increased traffic might use all CPU resources on the node, which can prevent other Pods from getting the CPU resources they need, even if they are properly configured. This can cause problems for Pods that run sensitive workloads or that need to respond quickly to requests. As a workaround, you can keep a significant amount of CPU unscheduled on nodes that have inter-node transparent encryption enabled. Alternatively, you can schedule a Pod with a low PriorityClass that has a large CPU request but never uses this CPU (a sketch of this placeholder Pod appears after this list).

  • Inter-node transparent encryption adds approximately 150 microseconds of latency between two nodes in the same zone that don't use Confidential GKE Nodes.

  • When you enable inter-node transparent encryption, traffic observability features used for tracking traffic on the Pods might not work as expected because the data in-transit is encrypted with keys that are not accessible to the underlying Google infrastructure.

  • When you enable inter-node transparent encryption, Pod IP addresses are not visible on the VPC. Features that depend on packet inspection, such as Packet Mirroring, and Pod CIDR-based VPC firewall rules are not compatible with inter-node transparent encryption.

  • When you enable inter-node transparent encryption across clusters attached to different VPC subnets, you need to manually create firewall rules to allow communications between the cluster nodes.

  • Inter-node transparent encryption turns off some of the Layer 7 capabilities of GKE Dataplane V2. As a result, you can't enable FQDN network policy and inter-node transparent encryption at the same time.

  • You cannot enable this feature at the same time as intranode visibility.
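
The following is a minimal sketch of the low-priority placeholder Pod workaround described in the CPU oversubscription limitation above. The PriorityClass name, Pod name, namespace, and CPU request are illustrative assumptions; size the request to match the headroom you want to keep free for the encryption overhead.

kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: cpu-headroom-low        # illustrative name
value: -10                      # low priority so real workloads can displace this Pod
preemptionPolicy: Never
description: "Reserves CPU headroom for inter-node encryption overhead"
---
apiVersion: v1
kind: Pod
metadata:
  name: cpu-headroom            # illustrative name
  namespace: kube-system        # illustrative namespace
spec:
  priorityClassName: cpu-headroom-low
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "1"                # CPU the scheduler reserves but this Pod never uses
EOF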

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • Follow the instructions to Enable GKE Enterprise.

  • GKE inter-node transparent encryption is only supported on Google Cloud CLI version 458.0.0 and later and the following GKE versions:

    • 1.26.10-gke.1024000 or later
    • 1.27.7-gke.1506000 or later
    • 1.28.2-gke.1098000 or later

Enable inter-node transparent encryption with GKE

You can enable inter-node transparent encryption with GKE on a single cluster or in a multi-cluster environment.

Enable inter-node transparent encryption on a new cluster

  1. To enable inter-node transparent encryption on a new cluster:

    gcloud container clusters create CLUSTER_NAME \
        --region=REGION \
        --enable-dataplane-v2 \
        --in-transit-encryption inter-node-transparent
    

    Replace the following:

    • CLUSTER_NAME with the name of your cluster.
    • REGION with the compute region of your cluster.
  2. To verify your configuration, use the following command to check the encryption status:

    kubectl -n kube-system exec -ti anetd-XXXX -- cilium status | grep Encryption
    

    The output is similar to the following:

    Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 2)]
    
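
In the previous command, anetd-XXXX is a placeholder for the name of one of the anetd Pods that GKE Dataplane V2 runs on each node. To find the actual Pod names, you can list them by their label, for example:

    kubectl -n kube-system get pods -l k8s-app=cilium -o wide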

Enable inter-node transparent encryption on an existing cluster

  1. To enable inter-node transparent encryption on an existing cluster:

    gcloud container clusters update CLUSTER_NAME \
        --in-transit-encryption inter-node-transparent \
        --region=REGION
    

    Replace the following:

    • CLUSTER_NAME with the name of your cluster.
    • REGION with the compute region of your cluster.
  2. To check that the Google Cloud CLI command completed successfully:

    gcloud container clusters describe CLUSTER_NAME \
        --region=REGION \
        --format json | jq .status
    

    Replace the following:

    • CLUSTER_NAME with the name of your cluster.
    • REGION with the compute region of your cluster.

    Wait until the status displays "RUNNING". Enabling inter-node encryption in GKE will automatically restart the nodes. It might take several hours for the node restart to occur and for the new nodes to enforce policies.

  3. To confirm that nodes have restarted:

    kubectl get nodes
    

    Check the AGE field of each node and proceed when the values show that the nodes were recently recreated.

  4. To verify your configuration, you can use the following command to check the encryption status:

    kubectl -n kube-system exec -ti anetd-XXXX -- cilium status | grep Encryption
    

    The output is similar to the following:

    Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 2)]
    

    Verify that the number of peers is one less than the number of nodes in your cluster. For example, in a cluster with 24 nodes, the number of peers should be 23. If the number of peers isn't one less than the number of nodes in the cluster, restart the anetd agent on your nodes.
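
If you need to restart the anetd agents, one approach, assuming anetd runs as a DaemonSet named anetd in the kube-system namespace (the default for GKE Dataplane V2 clusters), is a rolling restart:

    kubectl -n kube-system rollout restart daemonset/anetd
    kubectl -n kube-system rollout status daemonset/anetd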

Enable inter-node transparent encryption across clusters

Inter-node transparent encryption is not supported on Autopilot clusters. If your fleet includes Autopilot clusters, they won't be able to communicate with Standard clusters that have encryption enabled.

To enable inter-node transparent encryption in a multi-cluster environment:

  1. Enable inter-node transparent encryption on a new cluster or in an existing cluster.

  2. Register your cluster to a fleet.

  3. Enable inter-node transparent encryption for the fleet:

    gcloud container fleet dataplane-v2-encryption enable --project PROJECT_ID
    

    Replace PROJECT_ID with your project ID.

  4. Verify status on all nodes:

    kubectl -n kube-system get pods -l k8s-app=cilium -o name | xargs -I {} kubectl -n kube-system exec -ti {} -- cilium status
    

    The output is similar to the following:

    ...
    Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 5)]
    ...
    
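
If a node reports fewer peers than you expect, verify that all of the clusters are registered to the same fleet, for example by listing the fleet memberships:

    gcloud container fleet memberships list --project PROJECT_ID

Replace PROJECT_ID with your project ID.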

Disable inter-node transparent encryption

In some cases, you might want to disable inter-node transparent encryption in your GKE cluster for performance improvements, or to troubleshoot connectivity for your application. Before proceeding with this operation, consider the following:

  • Inter-node transparent encryption is enabled for the entire cluster and you can't partially disable it in individual Kubernetes resources such as namespaces or Pods.

  • Perform this operation during a maintenance window because it disrupts your Pods' traffic.

On a single cluster

To disable inter-node transparent encryption on a single cluster:

gcloud container clusters update CLUSTER_NAME \
    --region=REGION \
    --in-transit-encryption none

Replace the following:

  • CLUSTER_NAME: the name of your cluster.

  • REGION: the compute region of your cluster.

Disable in a cluster that's part of a fleet

You can turn off encryption for a cluster in a fleet by using either of the following options:

  • To completely remove the cluster from the fleet, unregister your cluster (a sketch of the unregister command appears after this list).

  • Alternatively, keep the cluster in the fleet but disable encryption:

    gcloud container fleet dataplane-v2-encryption disable --project PROJECT_ID
    

    Replace PROJECT_ID with your project ID.

    Disabling encryption with this command initiates the removal of remote nodes from the WireGuard peers list on each cluster. This process can take up to several minutes to complete, depending on the number of clusters and nodes involved. To see the updated peer count, you'll need to manually refresh the WireGuard peers list on each cluster. You can use your cluster management tool or the following command:

    kubectl -n kube-system exec -ti anetd-XXXX -- cilium status | grep Encryption
    
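
    A minimal sketch of the unregister option, where MEMBERSHIP_NAME is the fleet membership name of the cluster (you can find it with gcloud container fleet memberships list) and REGION/CLUSTER_NAME identify the cluster; check gcloud container fleet memberships unregister --help for the flags available in your gcloud version:

    gcloud container fleet memberships unregister MEMBERSHIP_NAME \
        --gke-cluster=REGION/CLUSTER_NAME \
        --project=PROJECT_ID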

Disable for an entire fleet of clusters

  • To disable inter-node transparent encryption in a fleet:

    gcloud container fleet dataplane-v2-encryption disable --project PROJECT_ID
    

    Replace PROJECT_ID with your project ID.

  • To disable inter-node transparent encryption and remove the now unused API, disable the GKE Dataplane V2 API at the fleet level. This will turn off the GKE Dataplane V2 controller running in your fleet.

    gcloud services disable gkedataplanev2.googleapis.com \
        --project=PROJECT_ID
    

    Replace PROJECT_ID with your project ID.

    To efficiently manage clusters with the same name and ensure multi-cluster encryption activation, follow these steps:

    1. Unregister the old cluster from the fleet before creating a new one with the same name.

    2. Re-register the new cluster upon recreation.

    3. If you forget to unregister a cluster before recreating it, delete the old membership and register the recreated cluster with a new membership.

    Failure to follow these steps may result in multi-cluster encryption not activating on the new cluster until the fleet membership is recreated.
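
    For example, to delete a stale membership (MEMBERSHIP_NAME is illustrative; find the actual name with the membership listing command shown earlier):

    gcloud container fleet memberships delete MEMBERSHIP_NAME \
        --project=PROJECT_ID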

How inter-node transparent encryption works with GKE

The following sections describe how inter-node transparent encryption works when you enable it in your cluster:

Encryption of network traffic between two Pods on different nodes

With inter-node transparent encryption enabled, GKE Dataplane V2 encrypts Pod-to-Pod traffic if Pods are on different nodes, regardless of the cluster to which those nodes belong. When the clusters are part of the same fleet, they belong to the same encryption domain.

Clusters with different inter-node transparent encryption configurations can co-exist in the same fleet. If you have a multi-cluster environment in which only some clusters use inter-node transparent encryption, the following considerations apply:

  • Pod-to-Pod communication between nodes in the same cluster is encrypted using the public/private key pair.

  • Pod-to-Pod communication between a node in a cluster that has inter-node transparent encryption enabled and a node in a cluster that doesn't have inter-node transparent encryption enabled fails.

Encryption key generation and usage

When the feature is enabled, every GKE node in the cluster automatically generates a public/private key pair known as the encryption keys.

  • The private key is stored in memory (not on disk) and never leaves the node. Using Confidential GKE Nodes further decreases the risk of keys being compromised because the node memory is also encrypted (with different keys).

  • The public key is shared with other nodes using the GKE Dataplane V2 control plane and is accessible to all nodes in the same encryption domain.

After the keys are exchanged, each node can establish a WireGuard tunnel with other nodes in the same encryption domain. Each tunnel is unique for a given pair of nodes.

Consider the following when dealing with the public/private key pairs and the session key:

  • Private/public key pair:

    • The public key is distributed in the cluster and all nodes in the cluster can access the public key.

    • The key pair is rotated when the node restarts. GKE doesn't rotate keys at regular intervals. To manually trigger a key rotation, drain and restart the node (a sketch of one way to do this appears after this list). This invalidates the original key pair and generates a new key pair.

  • Session key:

    • This key is not configurable.

    • This key is rotated every two minutes.

    • The session key is exclusive to the nodes involved in the tunnel.
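
As noted in the key pair rotation bullet, one way to manually rotate a node's keys is to drain the node, reset its underlying Compute Engine instance, and then uncordon it. A minimal sketch, where NODE_NAME is the node (GKE node names match their Compute Engine instance names) and ZONE is the instance's zone:

    kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
    gcloud compute instances reset NODE_NAME --zone=ZONE
    kubectl uncordon NODE_NAME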

What's next