Creating user clusters in a multi-cluster setup

In Anthos clusters on bare metal, user clusters run your workloads, and in a multi-cluster architecture, user clusters are created and managed by an admin cluster.

After you have created an admin cluster, run the bmctl create config command to create a yaml file that you can edit to define a user cluster. You then use a kubectl command to apply that configuration and create the user cluster.

Keeping workloads off the admin cluster protects sensitive administrative data, such as the SSH keys stored in the admin cluster, from people who don't need access to that information. In addition, keeping user clusters separate from each other provides good general security for your workloads.

Prerequisites

  • bmctl downloaded from gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl
  • A working admin cluster with access to the cluster API server (the controlPlaneVIP)
  • Network connectivity from the admin cluster nodes to all nodes of the target user cluster
  • The SSH key used to create the user cluster available as root or as a SUDO user on all nodes in the user cluster
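A common stumbling block with the last prerequisite is an SSH private key with overly permissive file modes. The following is a minimal, self-contained sketch of a permission check; the temporary file stands in for your real key path, which is an assumption here:

```shell
# Sketch: the SSH private key used for node access should be readable
# only by its owner (mode 600). The temp file is a stand-in for the key.
key=$(mktemp)
chmod 600 "$key"
# stat -c is the GNU coreutils form; stat -f is the BSD/macOS fallback.
perms=$(stat -c '%a' "$key" 2>/dev/null || stat -f '%Lp' "$key")
if [ "$perms" = "600" ]; then
  echo "key permissions ok"
else
  echo "key permissions too open: $perms"
fi
rm -f "$key"
```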

Create a user cluster configuration file

The configuration file for creating a user cluster is almost exactly the same as the one used for creating an admin cluster. The only difference is that you remove the local credentials configuration section so that the file is a valid collection of Kubernetes resources. The configuration section is at the top of the file, under the bmctl configuration variables section.

By default, user clusters inherit their credentials from the admin cluster that manages them. You can selectively override some or all of these credentials. See the sample user cluster configuration file for details.

In the steps below, double-check your edits to the configuration file. Because you create the user cluster with a kubectl command, there are limits on the preflight checks that run against the user cluster configuration.

  1. Create a user cluster configuration file with the bmctl create config command:

    bmctl create config -c USER_CLUSTER_NAME
    

    For example, issue the following command to create a configuration file for a user cluster named user1:

    bmctl create config -c user1
    

    The file is written to bmctl-workspace/user1/user1.yaml. The general path for the file is bmctl-workspace/CLUSTER_NAME/CLUSTER_NAME.yaml.

  2. Edit the configuration file with the following changes:

    • Remove the local credential file paths from the configuration. They are not needed for a user cluster, and they do not work with the kubectl command. Remove the following:
      ....
        gcrKeyPath: (path to GCR service account key)
        sshPrivateKeyPath: (path to SSH private key, used for node access)
        gkeConnectAgentServiceAccountKeyPath: (path to Connect agent service account key)
        gkeConnectRegisterServiceAccountKeyPath: (path to Hub registration service account key)
        cloudOperationsServiceAccountKeyPath: (path to Cloud Operations service account key)
      ....
            
    • Change the configuration to specify a cluster type of user instead of admin:
      ....
      spec:
        # Cluster type. This can be:
        #   1) admin:  to create an admin cluster. This can later be used to create
        #   user clusters.
        #   2) user:   to create a user cluster. Requires an existing admin cluster.
        #   3) hybrid: to create a hybrid cluster that runs admin cluster
        #   components and user workloads.
        #   4) standalone: to create a cluster that manages itself, runs user
        #   workloads, but does not manage other clusters.
        type: user
      ....
      
    • Make sure the admin cluster and user cluster specifications for the load balancer VIPs and address pools are complementary, and do not overlap with existing clusters. A sample pair of admin and user cluster configurations, specifying load balancing and address pools, is shown below:
      ....
      # Sample admin cluster config for load balancer and address pools
        loadBalancer:
          vips:
            controlPlaneVIP: 10.200.0.49
            ingressVIP: 10.200.0.50
          addressPools:
          - name: pool1
            addresses:
            - 10.200.0.50-10.200.0.70
      ....
      ....
      # Sample user cluster config for load balancer and address pools
      loadBalancer:
          vips:
            controlPlaneVIP: 10.200.0.71
            ingressVIP: 10.200.0.72
          addressPools:
          - name: pool1
            addresses:
            - 10.200.0.72-10.200.0.90
      ....
      

      Note that the rest of the user cluster configuration file is the same as the admin cluster configuration.

  3. Double-check your user cluster configuration file. When you create a user cluster, the Anthos clusters on bare metal preflight checks are limited, and cover only machine-level checks, such as the operating system version, conflicting software versions, and available resources.
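Because the limited preflight checks don't catch overlapping load balancer ranges, the non-overlap requirement from step 2 is worth verifying by hand. The following sketch compares the two sample address pool ranges above numerically; the hard-coded addresses come from the sample configurations, so substitute your own:

```shell
# Convert a dotted-quad IPv4 address to an integer for range comparison.
ip2int() { echo "$1" | awk -F. '{ print $1*16777216 + $2*65536 + $3*256 + $4 }'; }

# Admin cluster pool1 from the sample above: 10.200.0.50-10.200.0.70
admin_start=$(ip2int 10.200.0.50); admin_end=$(ip2int 10.200.0.70)
# User cluster pool1 from the sample above: 10.200.0.72-10.200.0.90
user_start=$(ip2int 10.200.0.72);  user_end=$(ip2int 10.200.0.90)

# Two ranges are disjoint exactly when one ends before the other starts.
if [ "$user_start" -gt "$admin_end" ] || [ "$user_end" -lt "$admin_start" ]; then
  echo "address pools do not overlap"
else
  echo "address pools overlap"
fi
# prints: address pools do not overlap
```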

Create the user cluster

Issue a kubectl command to apply the user cluster configuration and create the cluster:

kubectl --kubeconfig ADMIN_KUBECONFIG apply -f USER_CLUSTER_CONFIG

ADMIN_KUBECONFIG specifies the path to the admin cluster kubeconfig file, and USER_CLUSTER_CONFIG specifies the path to the user cluster yaml file that you edited in the previous section.

For example, for an admin cluster named admin and a user cluster configuration named user1, the command would be:

kubectl --kubeconfig bmctl-workspace/admin/admin-kubeconfig apply \
  -f bmctl-workspace/user1/user1.yaml

Wait for the user cluster to be ready

To verify that the user cluster is ready, use the kubectl wait command to test for a condition. The following command waits up to 30 minutes for the cluster to finish reconciling its status, then retrieves the kubeconfig file for the newly created user cluster:

kubectl --kubeconfig ADMIN_KUBECONFIG wait \
  cluster USER_CLUSTER_NAME -n cluster-USER_CLUSTER_NAME \
  --for=condition=Reconciling=False --timeout=30m && \
  kubectl --kubeconfig ADMIN_KUBECONFIG get secret USER_CLUSTER_NAME-kubeconfig \
  -n cluster-USER_CLUSTER_NAME \
  -o 'jsonpath={.data.value}' | base64 -d > USER_KUBECONFIG

where:

  • ADMIN_KUBECONFIG specifies the path to the admin cluster kubeconfig file.
  • USER_KUBECONFIG specifies the path of the user kubeconfig file to create.
  • USER_CLUSTER_NAME is the name of your user cluster.

For example, for an admin cluster named admin and a user cluster configuration named user1, the command would be:

kubectl --kubeconfig bmctl-workspace/admin/admin-kubeconfig wait  \
  cluster user1 -n cluster-user1 --for=condition=Reconciling=False --timeout=30m &&  \
  kubectl --kubeconfig bmctl-workspace/admin/admin-kubeconfig get secret  \
  user1-kubeconfig -n cluster-user1 -o 'jsonpath={.data.value}' | base64  \
  -d > bmctl-workspace/user1/user1-kubeconfig
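The last stage of the pipeline above only base64-decodes the data field of the kubeconfig secret; nothing cluster-specific happens at that point. Here is a self-contained illustration of that decoding step, using a short stand-in value rather than a real kubeconfig:

```shell
# Kubernetes stores Secret data base64-encoded; the kubeconfig is recovered
# by decoding that value. "apiVersion: v1" stands in for real kubeconfig text.
encoded=$(printf 'apiVersion: v1' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
# prints: apiVersion: v1
```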

Wait for worker node pools to be ready

In most cases, you will also want to wait for the worker node pools to be ready, since you typically use worker node pools in your user cluster to run workloads.

To verify that worker node pools are ready, use the kubectl wait command to test for a condition. In this case, the command again waits up to 30 minutes for the node pool to be ready:

kubectl --kubeconfig ADMIN_KUBECONFIG wait cluster USER_CLUSTER_NAME \
  -n cluster-USER_CLUSTER_NAME --for=condition=Reconciling=False --timeout=30m && \
  kubectl --kubeconfig ADMIN_KUBECONFIG wait nodepool NODE_POOL_NAME \
  -n cluster-USER_CLUSTER_NAME --for=condition=Reconciling=False --timeout=30m && \
  kubectl --kubeconfig ADMIN_KUBECONFIG get secret \
  USER_CLUSTER_NAME-kubeconfig -n cluster-USER_CLUSTER_NAME -o \
  'jsonpath={.data.value}' | base64 -d >  USER_KUBECONFIG
  

NODE_POOL_NAME specifies a node pool created with the user cluster.

For example, for an admin cluster named admin, a user cluster configuration named user1, and a node pool named node-pool-1, the command would be:

kubectl --kubeconfig bmctl-workspace/admin/admin-kubeconfig wait \
  cluster user1 -n cluster-user1 --for=condition=Reconciling=False --timeout=30m && \
  kubectl --kubeconfig bmctl-workspace/admin/admin-kubeconfig wait \
  nodepool node-pool-1 -n cluster-user1 --for=condition=Reconciling=False --timeout=30m && \
  kubectl --kubeconfig bmctl-workspace/admin/admin-kubeconfig get secret \
  user1-kubeconfig -n cluster-user1 -o \
  'jsonpath={.data.value}' | base64 -d > bmctl-workspace/user1/user1-kubeconfig

Complete sample user cluster configuration

The following is a sample user cluster configuration file created with the bmctl command. Note that this sample configuration uses placeholder cluster names, VIPs, and addresses, which may not be suitable for your network.

# Sample user cluster config:

apiVersion: v1
kind: Namespace
metadata:
  name: cluster-user1
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: user1
  namespace: cluster-user1
spec:
  # Cluster type. This can be:
  #   1) admin:  to create an admin cluster. This can later be used to create user clusters.
  #   2) user:   to create a user cluster. Requires an existing admin cluster.
  #   3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
  #   4) standalone: to create a cluster that manages itself, runs user workloads,
  #   but does not manage other clusters.
  type: user
  # Anthos cluster version.
  anthosBareMetalVersion: 1.6.2
  # GKE connect configuration
  gkeConnect:
    projectID: GOOGLE_PROJECT_ID
    # To override default credentials inherited from the admin cluster:
    # 1. Create a new secret in the admin cluster
    # 2. Uncomment the section below and refer to the secret created above
    # # Optionally override default secret inherited from the admin cluster.
    # connectServiceAccountSecret:
    #  name: GKE_CONNECT_AGENT_SA_SECRET
    #  namespace: cluster-user1
    # # Optionally override default secret inherited from the admin cluster.
    # registerServiceAccountSecret:
    #  name: GKE_CONNECT_REGISTER_SA_SECRET
    #  namespace: cluster-user1
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: 10.200.0.4
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which Pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service VIPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 10.96.0.0/12
  # Credentials specify the secrets that hold SSH key and image pull credential for the new cluster.
  # credentials:
  #  # Optionally override default ssh key secret inherited from the admin cluster.
  #  sshKeySecret:
  #    name: SSH_KEY_SECRET
  #    namespace: cluster-user1
  #  # Optionally override default image pull secret inherited from the admin cluster.
  #  imagePullSecret:
  #    name: IMAGE_PULL_SECRET
  #    namespace: cluster-user1
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can be either 'bundled' or 'manual'.
    # In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
    # In 'manual' mode the cluster relies on a manually-configured external load balancer.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the LB serves the kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # There are two load balancer VIPs: one for the control plane and one for the L7 Ingress
    # service. The VIPs must be in the same subnet as the load balancer nodes.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: 10.200.0.71
      # IngressVIP specifies the VIP shared by all services for ingress traffic.
      # Allowed only in non-admin clusters.
      # This address must be in the address pools below.
      ingressVIP: 10.200.0.72
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    addressPools:
    - name: pool1
      addresses:
      # Each address must be either in the CIDR form (1.2.3.0/24)
      # or range form (1.2.3.1-1.2.3.5).
      - 10.200.0.72-10.200.0.90
    # A load balancer nodepool can be configured to specify nodes used for load balancing.
    # These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #  nodes:
    #  - address: 
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Logging and Monitoring
  clusterOperations:
    # Cloud project for logs and metrics.
    projectID: GOOGLE_PROJECT_ID
    # Cloud location for logs and metrics.
    location: us-central1
    # Optionally override default secret inherited from the admin cluster.
    # serviceAccountSecret:
    #  name: CLOUD_OPERATIONS_SA_SECRET
    #  namespace: cluster-user1
    # Whether collection of application logs/metrics should be enabled (in addition to
    # collection of system logs/metrics which correspond to system components such as
    # Kubernetes control plane or cluster management agents).
    # enableApplication: false
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # Authentication; uncomment this section if you wish to enable authentication to the cluster with OpenID Connect.
  # authentication:
  #   oidc:
  #     # issuerURL specifies the URL of your OpenID provider, such as "https://accounts.google.com". The Kubernetes API
  #     # server uses this URL to discover public keys for verifying tokens. Must use HTTPS.
  #     issuerURL: 
  #     # clientID specifies the ID for the client application that makes authentication requests to the OpenID
  #     # provider.
  #     clientID: 
  #     # clientSecret specifies the secret for the client application.
  #     clientSecret: 
  #     # kubectlRedirectURL specifies the redirect URL (required) for the gcloud CLI, such as
  #     # "http://localhost:[PORT]/callback".
  #     kubectlRedirectURL: <Redirect URL for the gcloud CLI; optional, default is "http://kubectl.redirect.invalid">
  #     # username specifies the JWT claim to use as the username. The default is "sub", which is expected to be a
  #     # unique identifier of the end user.
  #     username: <JWT claim to use as the username; optional, default is "sub">
  #     # usernamePrefix specifies the prefix prepended to username claims to prevent clashes with existing names.
  #     usernamePrefix: 

  #     # group specifies the JWT claim that the provider will use to return your security groups.
  #     group: 
  #     # groupPrefix specifies the prefix prepended to group claims to prevent clashes with existing names.
  #     groupPrefix: 

  #     # scopes specifies additional scopes to send to the OpenID provider as a comma-delimited list.
  #     scopes: 
  #     # extraParams specifies additional key-value parameters to send to the OpenID provider as a comma-delimited
  #     # list.
  #     extraParams: 
  #     # certificateAuthorityData specifies a Base64 PEM-encoded certificate authority certificate of your identity
  #     # provider. It's not needed if your identity provider's certificate was issued by a well-known public CA.
  #     certificateAuthorityData: 
  # Node access configuration; uncomment this section if you wish to use a non-root user
  # with passwordless sudo capability for machine login.
  # nodeAccess:
  #   loginUser: 
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-user1
spec:
  clusterName: user1
  nodes:
  - address: 10.200.0.5
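Note that the sample above is a single manifest containing three YAML documents separated by --- lines (a Namespace, a Cluster, and a NodePool), all applied together by the kubectl apply command. The following minimal sketch lists each document's kind; the skeleton file it writes is a stripped-down stand-in for the real configuration file:

```shell
# Write a skeleton of the three-document manifest, then list each
# document's kind to confirm all three resources are present.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Namespace
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
EOF
kinds=$(awk '/^kind:/ { print $2 }' "$cfg")
echo "$kinds"
rm -f "$cfg"
# prints the three kinds, one per line: Namespace, Cluster, NodePool
```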