In Anthos clusters on Bare Metal, you can set up an admin cluster to manage other clusters securely. From the admin cluster you can create, update, upgrade, or delete user clusters. User clusters run workloads separately from administration, so sensitive information stays protected.
An admin cluster that manages multi-cluster workloads can provide high-availability (HA) reliability. In an HA cluster, if one control plane node fails, the other nodes keep working.
An admin cluster in a multi-cluster environment also provides the best foundational security. Because access to administrative data is separated from the workloads, people who access user workloads have no access to sensitive administrative data, such as SSH keys and service account data. As a result, there is a tradeoff between security and the resources required, since a separate admin cluster means you need dedicated resources for management in addition to the resources for workloads.
You create an admin cluster with the bmctl command. After you create the admin cluster, you create user clusters to run workloads.
Prerequisites:
- Download bmctl from gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl (a short download sketch follows this list).
- The workstation running bmctl has network connectivity to all nodes in the target admin cluster.
- The workstation running bmctl has network connectivity to the cluster API server (the control plane VIP).
- The SSH key used to create the admin cluster is available as root, or you have SUDO user access to all nodes in the target admin cluster.
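For reference, here is a minimal sketch of downloading bmctl with gsutil; it assumes the Cloud SDK is installed and that you want the binary in the current directory:
gsutil cp gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl .
chmod +x ./bmctl
./bmctl version  # optional check; assumes the version subcommand is available in your release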
For step-by-step instructions on creating a hybrid cluster, see the Anthos clusters on Bare Metal quickstart. Creating an admin cluster is similar to creating a hybrid cluster, except that an admin cluster does not run workloads.
Log in to gcloud and create an admin cluster config file
- Log in to gcloud as a user with gcloud auth application-default login:

  gcloud auth application-default login

  You need the Project Owner/Editor role to use the automatic API enablement and service account creation features described below. You can also add the following IAM roles to the user:
  - Service Account Admin
  - Service Account Key Admin
  - Project IAM Admin
  - Compute Viewer
  - Service Usage Admin

  To use service account credentials instead, point GOOGLE_APPLICATION_CREDENTIALS at a key file:

  export GOOGLE_APPLICATION_CREDENTIALS=JSON_KEY_FILE

  JSON_KEY_FILE specifies the path to the service account JSON key file.
- Get the Cloud project ID to use for cluster creation:

  export CLOUD_PROJECT_ID=$(gcloud config get-value project)
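If gcloud config get-value project does not return the project you intend to use, you can set the active project first; my-gcp-project below stands in for your own project ID:
gcloud config set project my-gcp-project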
Create an admin cluster config with bmctl
After you log in to gcloud and set up your project, you can create the cluster config file with the bmctl command. Note that in this example, all service accounts are created automatically by the bmctl create config command:
bmctl create config -c ADMIN_CLUSTER_NAME --enable-apis \
    --create-service-accounts --project-id=CLOUD_PROJECT_ID
where ADMIN_CLUSTER_NAME is the name of the cluster and CLOUD_PROJECT_ID is your project ID.
The following example shows how to create a config file for an admin cluster named admin1 associated with the project ID my-gcp-project:
bmctl create config -c admin1 --create-service-accounts --enable-apis --project-id=my-gcp-project
The file is written to bmctl-workspace/admin1/admin1.yaml.
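As a quick check before editing, you can confirm that the generated file exists under the default workspace location:
ls bmctl-workspace/admin1/
head bmctl-workspace/admin1/admin1.yaml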
Instead of enabling APIs and creating service accounts automatically, you can also give existing service accounts the appropriate IAM permissions. In that case, you can skip the automatic service account creation from the bmctl command in the previous step:
bmctl create config -c admin1
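For illustration, granting a role to an existing service account generally uses gcloud projects add-iam-policy-binding. The service account email and role below are placeholders only; the exact roles that each Anthos service account needs are listed in the Anthos clusters on Bare Metal documentation:
gcloud projects add-iam-policy-binding my-gcp-project \
    --member="serviceAccount:anthos-baremetal-cloud-ops@my-gcp-project.iam.gserviceaccount.com" \
    --role="roles/logging.logWriter"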
Edit the cluster config file
Now that you have a cluster config file, edit it to make the following changes (the three snippets after this list correspond to the changes, in order):
- Provide an SSH private key to access the admin cluster nodes (a key-setup sketch follows the snippets below):
- Check the config to make sure the cluster type is set to admin, which is the default:
- Change the config file to specify a multi-node, high-availability control plane. Specify an odd number of nodes so that the control plane can keep a majority quorum:
# bmctl configuration variables. Because this section is valid YAML but not a valid Kubernetes
# resource, this section can only be included when using bmctl to
# create the initial admin/hybrid cluster. Afterwards, when creating user clusters by directly
# applying the cluster and node pool resources to the existing cluster, you must remove this
# section.
gcrKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-gcr.json
sshPrivateKeyPath: /path/to/your/ssh_private_key
gkeConnectAgentServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-connect.json
gkeConnectRegisterServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-register.json
cloudOperationsServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-cloud-ops.json
spec:
  # Cluster type. This can be:
  # 1) admin: to create an admin cluster. This can later be used to create user clusters.
  # 2) user: to create a user cluster. Requires an existing admin cluster.
  # 3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
  # 4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
  type: admin
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: 10.200.0.4
      - address: 10.200.0.5
      - address: 10.200.0.6
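If a passwordless SSH key has not yet been distributed to the cluster nodes, the following sketch shows one way to set it up; the key path is an example, and 10.200.0.4 stands in for each node address from the snippet above:
ssh-keygen -t rsa -f ~/.ssh/anthos_admin_key -N ""
ssh-copy-id -i ~/.ssh/anthos_admin_key.pub root@10.200.0.4  # repeat for each node
Then set sshPrivateKeyPath in the config to ~/.ssh/anthos_admin_key.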
Create the admin cluster with the cluster config
Deploy the cluster with the bmctl command:
bmctl create cluster -c ADMIN_CLUSTER_NAME
where ADMIN_CLUSTER_NAME specifies the name of the cluster you configured in the previous section.
The following example command creates a cluster named admin1:
bmctl create cluster -c admin1
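After creation completes, you can point kubectl at the new admin cluster. The kubeconfig path below assumes the default bmctl workspace layout for a cluster named admin1:
export KUBECONFIG=$(pwd)/bmctl-workspace/admin1/admin1-kubeconfig
kubectl get nodes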
Complete admin cluster config example
The following is an example admin cluster config file created with the bmctl command. Note that this example config uses placeholder cluster names, VIPs, and addresses; they might not be appropriate for your network.
gcrKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-gcr.json
sshPrivateKeyPath: /bmctl/bmctl-workspace/.ssh/id_rsa
gkeConnectAgentServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-connect.json
gkeConnectRegisterServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-register.json
cloudOperationsServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-cloud-ops.json
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-admin1
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: admin1
  namespace: cluster-admin1
spec:
  # Cluster type. This can be:
  # 1) admin: to create an admin cluster. This can later be used to create user clusters.
  # 2) user: to create a user cluster. Requires an existing admin cluster.
  # 3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
  # 4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
  type: admin
  # Anthos cluster version.
  anthosBareMetalVersion: v1.6.2
  # GKE connect configuration
  gkeConnect:
    projectID: $GOOGLE_PROJECT_ID
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: 10.200.0.4
      - address: 10.200.0.5
      - address: 10.200.0.6
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which Pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service VIPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 10.96.0.0/12
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can be either 'bundled' or 'manual'.
    # In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
    # In 'manual' mode the cluster relies on a manually-configured external load balancer.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the LB serves the kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # There are two load balancer VIPs: one for the control plane and one for the L7 Ingress
    # service. The VIPs must be in the same subnet as the load balancer nodes.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: 10.200.0.71
      # IngressVIP specifies the VIP shared by all services for ingress traffic.
      # Allowed only in non-admin clusters.
      # This address must be in the address pools below.
      # ingressVIP: 10.0.0.2
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    # addressPools:
    # - name: pool1
    #   addresses:
    #   # Each address must be either in the CIDR form (1.2.3.0/24)
    #   # or range form (1.2.3.1-1.2.3.5).
    #   - 10.0.0.1-10.0.0.4
    # A load balancer nodepool can be configured to specify nodes used for load balancing.
    # These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #   nodes:
    #   - address: <Machine 1 IP>
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Logging and Monitoring
  clusterOperations:
    # Cloud project for logs and metrics.
    projectID: $GOOGLE_PROJECT_ID
    # Cloud location for logs and metrics.
    location: us-central1
    # Whether collection of application logs/metrics should be enabled (in addition to
    # collection of system logs/metrics which correspond to system components such as
    # Kubernetes control plane or cluster management agents).
    # enableApplication: false
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # Authentication; uncomment this section if you wish to enable authentication to the cluster with OpenID Connect.
  # authentication:
  #   oidc:
  #     # issuerURL specifies the URL of your OpenID provider, such as "https://accounts.google.com". The Kubernetes API
  #     # server uses this URL to discover public keys for verifying tokens. Must use HTTPS.
  #     issuerURL: <URL for OIDC Provider; required>
  #     # clientID specifies the ID for the client application that makes authentication requests to the OpenID
  #     # provider.
  #     clientID: <ID for OIDC client application; required>
  #     # clientSecret specifies the secret for the client application.
  #     clientSecret: <Secret for OIDC client application; optional>
  #     # kubectlRedirectURL specifies the redirect URL (required) for the gcloud CLI, such as
  #     # "http://localhost:[PORT]/callback".
  #     kubectlRedirectURL: <Redirect URL for the gcloud CLI; optional, default is "http://kubectl.redirect.invalid">
  #     # username specifies the JWT claim to use as the username. The default is "sub", which is expected to be a
  #     # unique identifier of the end user.
  #     username: <JWT claim to use as the username; optional, default is "sub">
  #     # usernamePrefix specifies the prefix prepended to username claims to prevent clashes with existing names.
  #     usernamePrefix: <Prefix prepended to username claims; optional>
  #     # group specifies the JWT claim that the provider will use to return your security groups.
  #     group: <JWT claim to use as the group name; optional>
  #     # groupPrefix specifies the prefix prepended to group claims to prevent clashes with existing names.
  #     groupPrefix: <Prefix prepended to group claims; optional>
  #     # scopes specifies additional scopes to send to the OpenID provider as a comma-delimited list.
  #     scopes: <Additional scopes to send to OIDC provider as a comma-separated list; optional>
  #     # extraParams specifies additional key-value parameters to send to the OpenID provider as a comma-delimited
  #     # list.
  #     extraParams: <Additional key-value parameters to send to OIDC provider as a comma-separated list; optional>
  #     # certificateAuthorityData specifies a Base64 PEM-encoded certificate authority certificate of your identity
  #     # provider. It's not needed if your identity provider's certificate was issued by a well-known public CA.
  #     certificateAuthorityData: <Base64 PEM-encoded certificate authority certificate of your OIDC provider; optional>
  # Node access configuration; uncomment this section if you wish to use a non-root user
  # with passwordless sudo capability for machine login.
  # nodeAccess:
  #   loginUser: <login user name>