控制平面组件包括 Kubernetes API 服务器、调度程序、控制器和 Kubernetes 配置持久保存所在的 etcd 数据库。在 GKE 中,Kubernetes 控制平面组件由 Google 管理和维护,而本地管理员在 Google Distributed Cloud 中管理控制平面组件。
在 Google Distributed Cloud 中,控制平面组件在公司网络中运行。您可以使用现有的公司网络政策和防火墙来保护 Kubernetes API 服务器。您还可以为 API 服务器分配内部 IP 地址,并限制对该地址的访问。
Google Distributed Cloud 中的所有通信都通过 TLS 通道进行,这些通道受三个证书授权机构 (CA) 的约束:etcd、cluster 和 org:
etcd CA 可保护从 API 服务器到 etcd 副本之间的通信,以及 etcd 副本之间的流量。此 CA 是自签名的。
集群 CA 可保护 API 服务器与所有内部 Kubernete API 客户端(kubelet、控制器、调度程序)之间的通信。此 CA 是自签名的。
组织 CA 是一个外部 CA,用于向外部用户提供 Kubernetes API。您负责管理此 CA。
对于管理员控制平面,密钥存储在控制平面节点上。对于用户集群,密钥以 Kubernetes Secret 形式存储在管理员控制平面中。API 服务器使用由组织 CA 签名的用户提供的证书进行配置。API 服务器使用服务器名称指示 (SNI) 来确定是使用由集群 CA 签名的密钥,还是使用由组织 CA 签名的密钥。
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-01。"],[],[],null,["This page describes the security features included in Google Distributed Cloud\n(software only) for VMware, including each layer of its infrastructure, and how\nyou can configure these security features to suit your needs.\n\nOverview\n\nGoogle Distributed Cloud (software only) for VMware offers several features to help\nsecure your [workloads](/kubernetes-engine/docs/how-to/deploying-workloads-overview), including the contents of your container image, the\ncontainer runtime, the cluster network, and access to the cluster API server.\n\nIt's best to take a layered approach to protecting your clusters and workloads.\nYou can apply the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) to the level of\naccess you provide to your users and workloads. You might need to make tradeoffs\nto allow the right level of flexibility and security.\n\nAuthentication and Authorization\n\nYou authenticate to your [clusters](/kubernetes-engine/docs/concepts/cluster-architecture) using [OpenID Connect\n(OIDC)](https://developers.google.com/identity/protocols/OpenIDConnect) or a Kubernetes Service Account token via the\n[Cloud console](/kubernetes-engine/fleet-management/docs/console).\n\nTo configure more granular access to Kubernetes resources at the cluster level\nor within Kubernetes namespaces, you use Kubernetes\n[Role-Based Access Control (RBAC)](/kubernetes-engine/docs/how-to/role-based-access-control). RBAC allows you to create detailed policies\nthat define which operations and resources you want to allow users and service\naccounts to access. With RBAC, you can control access for any validated identity\nprovided.\n\nTo further simplify and streamline your authentication and\nauthorization strategy for Kubernetes Engine, Google Distributed Cloud disables\n[legacy Attribute Based Access Control (ABAC)](/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters#legacyabac).\n\nControl plane security\n\nThe control plane components include the Kubernetes API server, scheduler,\ncontrollers and the [etcd database](https://github.com/coreos/etcd) where your Kubernetes\nconfiguration is persisted. Whereas in GKE, the\nKubernetes control plane components are managed and maintained by\nGoogle, local administrators manage the control plane components in\nGoogle Distributed Cloud.\n\nIn Google Distributed Cloud, the control plane components run within your\ncorporate network. You can protect the Kubernetes API server by using\nyour existing corporate network policies and firewalls. You can also assign a\ninternal IP address to the API server and limit access to the address.\n\nAll communication in Google Distributed Cloud is over TLS channels, which are\ngoverned by three certificate authorities (CAs): etcd, cluster, and org:\n\n- The etcd CA secures communication from the API server to the etcd replicas and also traffic between etcd replicas. This CA is self-signed.\n- The cluster CA secures communication between the API server and all internal Kubernetes API clients (kubelets, controllers, schedulers). This CA is self-signed.\n- The org CA is an external CA used to serve the Kubernetes API to external users. You manage this CA.\n\nFor admin control planes, keys are stored on the control plane\n[node](https://kubernetes.io/docs/concepts/architecture/nodes/). For user clusters, the keys are stored as Kubernetes\nSecrets in the admin control plane. The API server is configured with a\nuser-provided certificate signed by the org CA. The API server uses Server Name\nIndication (SNI) to determine whether to use the key signed by the cluster CA or\nthe key signed by the org CA.\n\nCluster authentication is handled by certificates and service account bearer\ntokens. As the administrator, you authenticate to the control plane using OIDC,\nor with the administrative certificate (which you use for initial role binding\ncreation, or for emergency purposes).\n\nCertificate rotation is handled in the following ways:\n\n- For the API server, control planes, and nodes, certificates are created or rotated at each upgrade.\n- CAs can be rotated rarely or on demand.\n\nNode security\n\nGoogle Distributed Cloud deploys your workloads in VMware instances, which are\nattached to your clusters as nodes. The following sections show you how to\nuse the node-level security features available to you.\n\nUbuntu\n\nGoogle Distributed Cloud uses an optimized version of [Ubuntu](http://www.ubuntu.com) as the\noperating system on which to run the Kubernetes control plane and nodes. Ubuntu\nincludes a rich set of modern [security features](https://wiki.ubuntu.com/Security/Features), and\nGoogle Distributed Cloud implements several security-enhancing features for\nclusters, including:\n\n- Images are preconfigured to meet PCI DSS, NIST Baseline High, and DoD Cloud Computing SRG Impact Level 2 standards.\n- Optimized [package](https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-root.manifest) set.\n- Google Cloud-tailored [Linux kernel](https://git.launchpad.net/%7Ecanonical-kernel/ubuntu/+source/linux-gcp/+git/bionic/).\n- Optional [automatic OS security updates](https://help.ubuntu.com/community/AutomaticSecurityUpdates).\n- Limited user accounts and disabled root login.\n\nAdditional security guides are available for Ubuntu, such as:\n\n- [CIS Benchmark](https://www.cisecurity.org/benchmark/ubuntu_linux/)\n- [DISA STIG](http://public.cyber.mil/stigs/)\n- [FIPS 140-2](https://wiki.ubuntu.com/Security/Certification)\n\nNode upgrades\n\nYou should upgrade your nodes on a regular basis. From time to time, security\nissues in the container runtime, Kubernetes itself, or the node operating system\nmight require you to upgrade your nodes more urgently. When you upgrade your\ncluster, each node's software is upgraded to their latest versions.\n\nSecuring your workloads\n\nKubernetes allows users to quickly provision, scale, and update container-based\nworkloads. This section describes tactics that administrators and users can use\nto limit the ability of the running containers to affect other containers in the\ncluster, the hosts on which they run, and the Google Cloud services\nenabled in their project.\n\nLimiting Pod container process privileges\n\nLimiting the privileges of containerized processes is important for the overall\nsecurity of your cluster. Kubernetes Engine allows you to set security-related\noptions via the [Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) on both Pods and containers.\nThese settings allow you to change the security settings of your processes such\nas:\n\n- User and group to run as.\n- Available Linux capabilities.\n- Privilege escalation.\n\nThe default node operating system, Ubuntu, applies the default [Docker AppArmor\nsecurity policies](/container-optimized-os/docs/how-to/secure-apparmor#using_the_default_docker_apparmor_security_profile) to all containers started by Kubernetes. You can view the\nprofile's template on [GitHub](https://github.com/moby/moby/blob/master/profiles/apparmor/template.go). Among other things, the profile denies the\nfollowing abilities to containers:\n\n- Writing to files directly in a process ID directory (/proc/).\n- Writing to files that are not in /proc/.\n- Writing to files in /proc/sys other than /proc/sys/kernel/shm\\*.\n- Mounting filesystems.\n\nAudit logging\n\n[Kubernetes Audit logging](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) provides a way for administrators to retain, query, process, and\nalert on events that occur in your Google Distributed Cloud environments.\nAdministrators can use the logged information to do forensic analysis, real-time\nalerting, or for cataloging how a fleet of Kubernetes Engine clusters are being\nused and by whom.\n\nBy default, Google Distributed Cloud logs admin activity. You can optionally\nalso log data access events, depending on the types of operations you are\ninterested in inspecting.\n\nThe Connect agent only talks to the local API server running\non-premises, and each cluster should have its own set of audit logs. All actions\nthat users perform from the UI through Connect are logged by that\ncluster.\n\nEncryption\n\nIf your clusters and workloads securely connect to\nGoogle Cloud services through [Cloud VPN](/vpn/docs/concepts/overview), you can use Cloud Key Management Service\n(Cloud KMS) for key management. [Cloud KMS](/kms) is a cloud-hosted\nkey management service that lets you manage\ncryptographic keys for your services. You can generate, use, rotate, and destroy\nAES256, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 cryptographic keys.\nCloud KMS is integrated with [Identity and Access Management (IAM)](/iam) and [Cloud Audit\nLogging](/logging/docs/audit) so that you can manage permissions on individual keys and monitor how\nthese are used. Use Cloud KMS to protect Secrets and other sensitive data that\nyou need to store. Otherwise, you can choose to use one of the following\nalternatives:\n\n- Kubernetes Secrets\n- Hashicorp Vault\n- Thales Luna network HSM\n- Google Cloud Hardware Security Module (HSM)\n\nKubernetes Secrets\n\nKubernetes [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) resources store sensitive data, such as passwords, OAuth\ntokens, and SSH keys, in your clusters. Storing sensitive data in Secrets is\nmore secure than storing them in plaintext [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) or in Pod\nspecifications. Using Secrets gives you control over how sensitive data is used,\nand reduces the risk of exposing the data to unauthorized users."]]