Cluster Trust

This page describes how trust is established in Google Kubernetes Engine (GKE) clusters, including how masters and nodes authenticate requests.

Intracluster communication

A cluster contains many connections over which Kubernetes components communicate.

Master-to-node
The master communicates with a node to manage containers. When the master sends a request to the node, for example, kubectl logs, that request is sent over an SSH tunnel and is further protected with unauthenticated TLS, which provides integrity and encryption. When a node sends a request to the master, for example, from the kubelet to the API server, that request is authenticated and encrypted using mutual TLS.
Node-to-node
A node may communicate with another node as part of a specific workload. When a node sends a request to another node, that request is authenticated, and is encrypted if the connection crosses a physical boundary controlled by Google. Note that no Kubernetes components require node-to-node communication. To learn more, read the Encryption in Transit whitepaper.
Pod-to-Pod
A Pod may communicate with another Pod as part of a specific workload. When a Pod sends a request to another Pod, that request is neither authenticated nor encrypted. Note that no Kubernetes components require Pod-to-Pod communication. Pod-to-Pod traffic can be restricted with a Network Policy, and can be encrypted using a service mesh like Istio or by implementing application-layer encryption.
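As a sketch of the Network Policy approach, a manifest such as the following (all names and labels are illustrative) restricts ingress so that only Pods labeled app: backend in the same namespace can reach Pods labeled app: db:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db                 # the policy applies to Pods with this label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend    # only these Pods may connect
      ports:
        - protocol: TCP
          port: 5432          # example port; adjust for your workload
```

Note that the cluster must have network policy enforcement enabled for such a policy to take effect; without it, the manifest is accepted but not enforced.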
etcd-to-etcd
An instance of etcd may communicate with another instance of etcd to keep state updated. When an instance of etcd sends a request to another instance, that request is authenticated and encrypted using mutual TLS. The traffic never leaves a GKE-owned network protected by firewalls.
Master-to-etcd
The master communicates with etcd entirely over localhost; this traffic is neither authenticated nor encrypted.

Root of trust

GKE is configured such that:

  • The cluster root Certificate Authority (CA) is used to validate the API server and kubelets’ client certificates. That is, masters and nodes have the same root of trust. Any kubelet within the cluster node pool can request a certificate from this CA using the certificates.k8s.io API, by submitting a certificate signing request.
  • A separate per-cluster etcd CA is used to validate etcd’s certificates.
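You can observe certificate signing requests submitted through the certificates.k8s.io API with kubectl. This is a read-only sketch; CSR names vary per cluster:

```shell
# List certificate signing requests submitted to the cluster root CA
kubectl get certificatesigningrequests

# Inspect one request (replace CSR_NAME with a name from the list above)
kubectl describe csr CSR_NAME
```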

API server and kubelets

The API server and kubelets rely on Kubernetes’ cluster root CA for trust. In GKE, the master API certificate is signed by the cluster root CA. Each cluster runs its own CA, so that if one cluster’s CA were to be compromised, no other cluster CA would be affected.

An internal Google service manages root keys for this CA, which are non-exportable. This service accepts certificate signing requests, including those from the kubelets in each GKE cluster. Even if the API server in a cluster were compromised, the CA would not be compromised, so no other clusters would be affected.

Each node in the cluster is injected with a shared Secret at creation, which it can use to submit certificate signing requests to the cluster root CA and obtain kubelet client certificates. These certificates are then used by the kubelet to authenticate its requests to the API server. Note that this shared Secret is reachable by Pods, unless metadata concealment is enabled.
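As a sketch, metadata concealment can be enabled when creating a node pool; the flag below is in the gcloud beta surface, and availability may depend on your gcloud version (CLUSTER_NAME and POOL_NAME are placeholders):

```shell
# Create a node pool with metadata concealment enabled,
# blocking Pod access to sensitive instance metadata such as the bootstrap Secret
gcloud beta container node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --workload-metadata-from-node=SECURE
```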

The API server and kubelet certificates are valid for five years, but they can be rotated sooner by manually performing a credential rotation.

etcd

etcd relies on a separate per-cluster etcd CA for trust in GKE.

Root keys for the etcd CA are distributed to the metadata of each VM on which the master runs. Any code executing on master VMs, or with access to compute metadata for these VMs, can sign certificates as this CA. Even if etcd in a cluster were compromised, the CA is not shared between clusters, so no other clusters would be affected.

The etcd certificates are valid for five years.

Rotating your certificates

To rotate all your cluster’s API server and kubelet certificates, perform a credential rotation. There is no way for you to trigger a rotation of the etcd certificates; this is managed for you in GKE.
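With the gcloud CLI, a credential rotation is a two-step process (a sketch; substitute your cluster name and zone):

```shell
# Begin the rotation: provisions new credentials and a new IP for the master
gcloud container clusters update CLUSTER_NAME \
    --zone COMPUTE_ZONE \
    --start-credential-rotation

# After all API clients and nodes are using the new credentials,
# complete the rotation to revoke the old ones
gcloud container clusters update CLUSTER_NAME \
    --zone COMPUTE_ZONE \
    --complete-credential-rotation
```

Until the rotation is completed, both the old and new credentials remain valid, so clients can be migrated without downtime.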
