If you want to enable SELinux to secure your containers, you must make sure that SELinux is enabled in Enforced mode on all of your host machines. Starting with GKE on Bare Metal release 1.9.0 or later, you can enable or disable SELinux before or after cluster creation or cluster upgrades. SELinux is enabled by default on Red Hat Enterprise Linux (RHEL) and CentOS. If SELinux is disabled on your host machines, or if you aren't sure, see Securing your containers using SELinux for instructions on how to enable it.
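To confirm the mode before creating the cluster, you can check each host directly. The following is a minimal sketch, assuming RHEL or CentOS hosts with the standard SELinux utilities installed; it is not a substitute for the steps in the SELinux guide referenced above.

# Show the current SELinux mode; it should report "Enforcing".
getenforce

# Switch the running system to enforcing mode (not persistent across reboots).
sudo setenforce 1

# Persist enforcing mode; if SELinux was disabled at boot, a reboot is
# required after changing /etc/selinux/config.
sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config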
...
gcrKeyPath: (path to GCR service account key)
sshPrivateKeyPath: (path to SSH private key, used for node access)
gkeConnectAgentServiceAccountKeyPath: (path to Connect agent service account key)
gkeConnectRegisterServiceAccountKeyPath: (path to Hub registration service account key)
cloudOperationsServiceAccountKeyPath: (path to Cloud Operations service account key)
...
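These edits are made to the user cluster config file generated by the bmctl create config command. For example, for a user cluster named user1, the file is written to bmctl-workspace/user1/user1.yaml:

# Generate an editable config file for the user cluster.
bmctl create config -c user1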
Change the config to specify a cluster type of user instead of admin:
...
spec:
# Cluster type. This can be:
# 1) admin: to create an admin cluster. This can later be used to create
# user clusters.
# 2) user: to create a user cluster. Requires an existing admin cluster.
# 3) hybrid: to create a hybrid cluster that runs admin cluster
# components and user workloads.
# 4) standalone: to create a cluster that manages itself, runs user
# workloads, but does not manage other clusters.
type: user
...
...
# NodeConfig specifies the configuration that applies to all nodes in the cluster.
nodeConfig:
# podDensity specifies the pod density configuration.
podDensity:
# maxPodsPerNode specifies at most how many pods can be run on a single node.
maxPodsPerNode: 110
...
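For user clusters, allowable values for maxPodsPerNode are 32-250, with a default of 110 if unspecified; this value cannot be changed after the cluster is created. With the config edits complete, apply the file against the admin cluster to create the user cluster. For example, for a user cluster named user1 and an admin cluster kubeconfig at bmctl-workspace/admin/admin-kubeconfig:

# Apply the user cluster config; --kubeconfig points at the admin
# cluster's kubeconfig file.
bmctl create cluster -c user1 --kubeconfig bmctl-workspace/admin/admin-kubeconfig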
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2024-05-03。"],[[["\u003cp\u003eUser clusters in Google Distributed Cloud run workloads and are managed by an admin cluster in a multi-cluster setup, ensuring separation and security.\u003c/p\u003e\n"],["\u003cp\u003eCreating a user cluster involves using the \u003ccode\u003ebmctl create config\u003c/code\u003e command to generate a YAML config file, which is then modified to remove local credentials, specify the cluster type as "user", and to enroll it in a fleet.\u003c/p\u003e\n"],["\u003cp\u003eTo deploy a user cluster, the \u003ccode\u003ebmctl create cluster\u003c/code\u003e command is used with the user cluster name and the admin cluster's kubeconfig file path.\u003c/p\u003e\n"],["\u003cp\u003eSELinux can be enabled on host machines to secure containers, and it's enabled by default on RHEL and CentOS systems.\u003c/p\u003e\n"],["\u003cp\u003eUser clusters can be registered in a fleet, allowing all clusters in a project to be enrolled in the GKE On-Prem API, unless disabled.\u003c/p\u003e\n"]]],[],null,["# Create user clusters\n\n\u003cbr /\u003e\n\nIn Google Distributed Cloud, user clusters run your workloads, and in a\n[multi-cluster architecture](/anthos/clusters/docs/bare-metal/1.16/installing/creating-clusters/create-clusters-overview),\nuser clusters are created and managed by an admin cluster.\n\nOnce you've created an admin cluster, calling the `bmctl create config` command\ncreates a YAML file you can edit to define your user cluster. To apply the\nconfiguration and create the user cluster, use the `bmctl create cluster`\ncommand. Preflight checks are applicable to the user clusters created with\n`bmctl create cluster` command.\n\nKeeping workloads off the admin cluster protects sensitive administrative data,\nlike SSH keys stored in the admin cluster, from those who don't need access\nto that information. Additionally, keeping user clusters separate from each\nother provides good general security for your workloads.\n\nPrerequisites\n-------------\n\n- Latest `bmctl` is downloaded (`gs://anthos-baremetal-release/bmctl/1.16.8/linux-amd64/bmctl`) from Cloud Storage.\n- Working admin cluster with access to the cluster API server (the `controlPlaneVIP`).\n- Admin cluster nodes have network connectivity to all nodes on the target user cluster.\n- Workstation running `bmctl` has network connectivity to all nodes in the target user clusters.\n- The admin workstation can establish an SSH connection to each of the user cluster nodes.\n- Connect-register service account is configured on the admin cluster for use with Connect.\n\nEnable SELinux\n--------------\n\nIf you want to enable SELinux to secure your containers, you must make sure that\nSELinux is enabled in `Enforced` mode on all your host machines. Starting with\nGoogle Distributed Cloud release 1.9.0 or later, you can enable or disable SELinux\nbefore or after cluster creation or cluster upgrades. SELinux is enabled by\ndefault on Red Hat Enterprise Linux (RHEL) and CentOS. 
If SELinux is disabled on\nyour host machines or you aren't sure, see\n[Securing your containers using SELinux](/anthos/clusters/docs/bare-metal/1.16/installing/configure-selinux)\nfor instructions on how to enable it.\n\nGoogle Distributed Cloud supports SELinux in only RHEL and CentOS systems.\n\nCreate a user cluster config file\n---------------------------------\n\nThe config file for creating a user cluster is almost exactly the same as the\none used for creating an admin cluster. The only difference is that you remove\nthe local credentials configuration section to make the config a valid\ncollection of Kubernetes resources. The configuration section is at the top of\nthe file under the `bmctl configuration variables` section. For examples of user\ncluster configurations, see\n[User clusters](/anthos/clusters/docs/bare-metal/1.16/reference/config-samples#user_clusters)\nin the Cluster configuration samples.\n\nBy default, user clusters inherit their credentials from the admin cluster that\nmanages them. You can selectively override some or all of these credentials.\n\n1. Create a user cluster config file with the `bmctl create config` command:\n\n bmctl create config -c \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e\n\n For example, issue the following to create a config file for a user cluster\n called `user1`: \n\n bmctl create config -c user1\n\n The file is written to `bmctl-workspace/user1/user1.yaml`. The generic path\n to the file is `bmctl-workspace/`\u003cvar translate=\"no\"\u003eCLUSTER NAME\u003c/var\u003e`/`\u003cvar translate=\"no\"\u003eCLUSTER_NAME.yaml\u003c/var\u003e.\n2. Edit the config file with the following changes:\n\n - Remove the local credentials file paths from the config:\n\n ...\n gcrKeyPath: (path to GCR service account key)\n sshPrivateKeyPath: (path to SSH private key, used for node access)\n gkeConnectAgentServiceAccountKeyPath: (path to Connect agent service account key)\n gkeConnectRegisterServiceAccountKeyPath: (path to Hub registration service account key)\n cloudOperationsServiceAccountKeyPath: (path to Cloud Operations service account key)\n ...\n\n - Change the config to specify a cluster type of `user` instead of `admin`:\n\n ...\n spec:\n # Cluster type. This can be:\n # 1) admin: to create an admin cluster. This can later be used to create\n # user clusters.\n # 2) user: to create a user cluster. Requires an existing admin cluster.\n # 3) hybrid: to create a hybrid cluster that runs admin cluster\n # components and user workloads.\n # 4) standalone: to create a cluster that manages itself, runs user\n # workloads, but does not manage other clusters.\n type: user\n ...\n\n - Register your clusters to a [fleet](/anthos/fleet-management/docs) by specifying\n your project ID in the `gkeConnect.projectID` field. 
This project is referred to\n as the [fleet host project](/anthos/fleet-management/docs/fleet-concepts#fleet-host-project).\n\n ...\n gkeConnect:\n projectID: my-project-123\n ...\n\n - In 1.16 and later, if the GKE On-Prem API is enabled in your\n Google Cloud project, all clusters in the project are\n [enrolled in the GKE On-Prem API](/anthos/clusters/docs/bare-metal/1.16/how-to/enroll-cluster)\n automatically in the region configured in `clusterOperations.location`.\n\n - If you want to enroll all clusters in the project in the GKE On-Prem API,\n be sure to do the steps in\n [Before you begin](/anthos/clusters/docs/bare-metal/1.16/how-to/enroll-cluster#before_you_begin)\n to activate and use the GKE On-Prem API in the project.\n\n - If you don't want to enroll the cluster in the GKE On-Prem API, include\n this section and set `gkeOnPremAPI.enabled` to `false`. If you don't\n want to enroll any clusters in the project, disable\n `gkeonprem.googleapis.com` (the service name for the GKE On-Prem API)\n in the project. For instructions, see\n [Disabling services](/service-usage/docs/enable-disable#disabling).\n\n - Specify the IP address of the control plane node.\n\n ...\n # Sample control plane config\n controlPlane:\n nodePoolSpec:\n nodes:\n - address: 10.200.0.20\n ...\n\n - Ensure the admin and user cluster specifications for the load balancer VIPs\n and address pools are complementary, and don't overlap existing\n clusters. A sample pair of admin and user cluster configurations,\n specifying load balancing and address pools, is shown below:\n\n ...\n # Sample admin cluster config for load balancer and address pools\n loadBalancer:\n vips:\n controlPlaneVIP: 10.200.0.49\n ingressVIP: 10.200.0.50\n addressPools:\n - name: pool1\n addresses:\n - 10.200.0.50-10.200.0.70\n ...\n ...\n # Sample user cluster config for load balancer and address pools\n loadBalancer:\n vips:\n controlPlaneVIP: 10.200.0.71\n ingressVIP: 10.200.0.72\n addressPools:\n - name: pool1\n addresses:\n - 10.200.0.72-10.200.0.90\n ...\n\n The rest of the user cluster config files are the same as the\n admin cluster config.\n - Specify the pod density of cluster nodes:\n\n ...\n # NodeConfig specifies the configuration that applies to all nodes in the cluster.\n nodeConfig:\n # podDensity specifies the pod density configuration.\n podDensity:\n # maxPodsPerNode specifies at most how many pods can be run on a single node.\n maxPodsPerNode: 110\n ...\n\n For user clusters, allowable values for `maxPodsPerNode` are `32-250`. The\n default value if unspecified is `110`. Once the cluster is created, this\n value cannot be updated.\n\n Pod density is also limited by your cluster's available IP resources. 
For\n details, see\n [Pod networking](/anthos/clusters/docs/bare-metal/1.16/concepts/network-reqs#pod_networking).\n\nCreate the user cluster\n-----------------------\n\nIssue the `bmctl` command to apply the user cluster config and create the\ncluster: \n\n bmctl create cluster -c \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e --kubeconfig \u003cvar translate=\"no\"\u003eADMIN_KUBECONFIG\u003c/var\u003e\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003e\u003ccode translate=\"no\" dir=\"ltr\"\u003eUSER_CLUSTER_NAME\u003c/code\u003e\u003c/var\u003e: the cluster name created in the previous section.\n- \u003cvar translate=\"no\"\u003e\u003ccode translate=\"no\" dir=\"ltr\"\u003eADMIN_KUBECONFIG\u003c/code\u003e\u003c/var\u003e: the path to the admin cluster kubeconfig file.\n\nFor example, for a user cluster named `user1`, and an admin cluster kubeconfig\nfile with the path `kubeconfig bmctl-workspace/admin/admin-kubeconfig`, the\ncommand would be: \n\n bmctl create cluster -c user1 --kubeconfig bmctl-workspace/admin/admin-kubeconfig\n\nSample user cluster configurations\n----------------------------------\n\nFor example user cluster configurations, see\n[User clusters](/anthos/clusters/docs/bare-metal/1.16/reference/config-samples#user_clusters) in the\nCluster configuration samples."]]