# Cluster trust

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

***

*Last updated 2025-09-01 (UTC).*

This page describes the trust model in Google Kubernetes Engine (GKE) clusters, including communication within clusters and how requests are authenticated for components like control planes.

This document is for security specialists who want to understand GKE's cluster trust model. To learn more about common roles and example tasks that we reference in Google Cloud content, see [Common GKE user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).

Before reading this page, ensure that you're familiar with the following:

- [General overview of GKE security](/kubernetes-engine/docs/concepts/security-overview)
- [General overview of GKE networking](/kubernetes-engine/docs/concepts/network-overview)

Intracluster communication
--------------------------

GKE applies various security mechanisms to traffic between cluster components, as follows:

- **Traffic between the control plane and nodes**: the control plane communicates with a node to manage containers. When the control plane sends a request (for example, `kubectl logs`) to a node, the request is sent over a Konnectivity proxy tunnel. Requests that the control plane sends are authenticated and protected by TLS.
  When a node sends a request to the control plane, for example, from the kubelet to the API server, that request is authenticated and encrypted using mutual TLS (mTLS).

  All newly created and updated clusters use TLS 1.3 for control plane to node communication. TLS 1.2 is the minimum supported version for control plane to node communication.

- **Traffic between nodes**: nodes are Compute Engine VMs. Connections between nodes inside the Google Cloud production network are authenticated and encrypted. For details, refer to the VM-to-VM section of the [Encryption in Transit whitepaper](/docs/security/encryption-in-transit#virtual_machine_to_virtual_machine).

- **Traffic between Pods**: when a Pod sends a request to another Pod, that traffic isn't authenticated by default. GKE encrypts traffic between Pods on different nodes at the network layer. Traffic between Pods on the *same node* isn't encrypted by default. For details about the network-layer encryption, see [Google Cloud virtual network encryption and authentication](/docs/security/encryption-in-transit#virtual-network).

  You can restrict Pod-to-Pod traffic with a [NetworkPolicy](/kubernetes-engine/docs/how-to/network-policy), and you can encrypt all Pod-to-Pod traffic, including traffic on the same node, by using a service mesh like [Cloud Service Mesh](/anthos/service-mesh) or a different method of application-layer encryption.

- **Traffic between control plane components**: traffic between the various components that run on the control plane is authenticated and encrypted using TLS. This traffic never leaves a Google-owned network that's protected by firewalls.

Root of trust
-------------

GKE has the following configuration:

- The [cluster root Certificate Authority (CA)](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) is used to validate the API server and kubelets' client certificates.
That is, control planes and nodes have the same root of trust. Any kubelet within the cluster node pool can request a certificate from this CA by using the `certificates.k8s.io` API to submit a [certificate signing request](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#create-a-certificate-signing-request). The root CA has a limited lifetime, as described in the [Cluster root CA lifetime](#root-ca-lifetime) section.
- In clusters that run etcd database instances on the control plane VMs, a separate per-cluster etcd peer CA is used to establish trust between etcd instances.
- In all GKE clusters, a separate etcd API CA is used to establish trust between the Kubernetes API server and the etcd API.

### API server and kubelets

The API server and the kubelets rely on the cluster root CA for trust. In GKE, the control plane API certificate is signed by the cluster root CA. Each cluster runs its own CA, so that if one cluster's CA is compromised, no other cluster's CA is affected.

An internal service manages the root keys for this CA, and those keys are non-exportable. This service accepts certificate signing requests, including those from the kubelets in each GKE cluster. Even if the API server in a cluster were compromised, the CA would not be compromised, so no other clusters would be affected.

Each node in the cluster is injected with a shared secret at creation, which the node can use to submit certificate signing requests to the cluster root CA and obtain kubelet client certificates. The kubelet then uses these certificates to authenticate its requests to the API server.
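This certificate request flow uses the standard Kubernetes `certificates.k8s.io` API. As a rough illustration only, a kubelet client certificate request for a hypothetical node named `node-1` might look like the following sketch; the object name, the request payload, and the exact fields GKE's internal flow populates are assumptions, not GKE's actual internal request:

```yaml
# Hypothetical CertificateSigningRequest, similar in shape to what a
# kubelet submits to obtain its client certificate. The metadata name
# and the base64-encoded PKCS#10 request are placeholders.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: node-csr-node-1
spec:
  # Standard signer for kubelet client certificates.
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  # Base64-encoded PEM CSR generated on the node (placeholder value).
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0...
  usages:
    - digital signature
    - client auth
```

Once a request like this is approved and signed, the issued certificate appears in the object's `status.certificate` field; in GKE, approval and signing for kubelet requests are handled for you by the service that manages the cluster root CA.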
This shared secret is reachable by Pods on the node, unless you enable [Shielded GKE Nodes](/kubernetes-engine/docs/how-to/shielded-gke-nodes) or [Workload Identity Federation for GKE](/kubernetes-engine/docs/how-to/workload-identity).

#### Cluster root CA lifetime

The cluster root CA has a limited lifetime, after which any certificates signed by the expired CA are invalid. Check the approximate expiry date of your cluster's CA by following the instructions in [Check credential lifetime](/kubernetes-engine/docs/how-to/credential-rotation#check_credential_lifetime).

You should manually rotate your credentials before your existing root CA expires. If the CA expires and you don't rotate your credentials, your cluster might enter an unrecoverable state. GKE has the following mechanisms to try to prevent unrecoverable clusters:

- Your cluster enters a `DEGRADED` state seven days before CA expiry.
- GKE attempts an automatic credential rotation 30 days before CA expiry. This automatic rotation ignores maintenance windows and might cause disruptions as GKE recreates nodes to use the new credentials. External clients, like kubectl in local environments, won't work until you update them to use the new credentials.

> **Note:** Automatic rotation is a last resort to prevent an unrecoverable cluster. Don't rely on it. Instead, plan to rotate your credentials manually during maintenance periods, well in advance of CA expiry.

To learn how to perform a rotation, see [Rotate your cluster credentials](/kubernetes-engine/docs/how-to/credential-rotation).

### Cluster state storage

GKE clusters store the state of Kubernetes API objects as key-value pairs in a database. The Kubernetes API server in your control plane interacts with this database by using the [etcd API](https://etcd.io/docs/latest/learning/api/).
GKE uses one of the following technologies to run the cluster state database:

- **etcd**: the cluster uses etcd instances that run on the control plane VMs.
- **Spanner**: the cluster uses a [Spanner](/spanner) database that runs outside of the control plane VMs.

Regardless of the database technology that a cluster uses, every GKE cluster serves the etcd API in the control plane. To encrypt traffic that involves the cluster state database, GKE uses one or both of the following per-cluster CAs:

- **etcd API CA**: used to sign certificates for traffic to and from etcd API endpoints. An etcd API CA runs in every GKE cluster.
- **etcd peer CA**: used to sign certificates for traffic between etcd database instances on the control plane. An etcd peer CA runs in any cluster that uses etcd databases. Clusters that use Spanner databases don't use the etcd peer CA.

Root keys for the etcd API CA are distributed to the metadata of each Compute Engine instance that the control plane runs on. The etcd API CA isn't shared between clusters.

The etcd CA certificates are valid for five years. GKE automatically rotates these certificates before they expire.

Certificate rotation
--------------------

To rotate all of your cluster's API server and kubelet certificates, perform a [credential rotation](/kubernetes-engine/docs/how-to/credential-rotation). You can't trigger a rotation of the etcd certificates yourself; GKE manages that rotation for you.

> **Note:** Performing a credential rotation causes GKE to upgrade all node pools to the closest supported node version, and causes brief downtime for the cluster API.

What's next
-----------

- [Read more about managing TLS certificates in the Kubernetes documentation](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/).