Creating standalone clusters

In Anthos on bare metal, standalone clusters run workloads and manage themselves, but they can't manage other clusters. Standalone clusters remove the need to run a separate admin cluster in resource-constrained scenarios.

When you create standalone clusters, there is a trade-off between reduced resource requirements and overall security. Because standalone clusters manage themselves, running workloads on the same cluster increases the risk of exposing sensitive administrative data, such as SSH keys, to security threats.

You create a standalone cluster with a single control plane by using the bmctl command. The bmctl command can be run on a separate workstation or on one of the standalone cluster nodes. Note that while this configuration uses fewer resources, it does not provide high availability (HA), and the resulting cluster has a single point of failure.

You can also create a standalone cluster in high availability (HA) mode. In an HA standalone cluster, if one node fails, others take its place. To create an HA cluster, specify more than one node for the control plane.

Prerequisites:

  • bmctl downloaded from gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl (a download sketch follows this list).
  • The workstation running bmctl must have network connectivity to all nodes in the target standalone cluster.
  • The workstation running bmctl must have network connectivity to the control plane VIP of the target standalone cluster.
  • The SSH key used to create the standalone cluster must be available to root or a SUDO user on all nodes of the target standalone cluster.
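
A minimal download sketch for the first prerequisite, assuming the gsutil CLI is installed and that you want the bmctl binary in the current working directory:

gsutil cp gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl .
chmod a+x bmctl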

Logging in to gcloud and creating a standalone cluster config file

  1. Log in to gcloud as a user with gcloud auth application-default login:
    gcloud auth application-default login
    
    You need a Project Owner/Editor role to use the automatic API enablement and service account creation features described below. You can also add the following IAM roles to the user:
    • Service Account Admin
    • Service Account Key Admin
    • Project IAM Admin
    • Compute Viewer
    • Service Usage Admin
    Alternatively, if you already have a service account with these roles, run:
    export GOOGLE_APPLICATION_CREDENTIALS=JSON_KEY_FILE
    
    JSON_KEY_FILE specifies the path to your service account JSON key file (a key-creation sketch follows this list).
  2. Get the Cloud project ID to use with cluster creation:
    export CLOUD_PROJECT_ID=$(gcloud config get-value project)
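
If you use an existing service account instead, the following is a minimal sketch for creating the JSON key referenced by GOOGLE_APPLICATION_CREDENTIALS, assuming a hypothetical account named my-existing-sa in the project my-gcp-project:

gcloud iam service-accounts keys create ./sa-key.json \
    --iam-account=my-existing-sa@my-gcp-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=./sa-key.json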
    

Creating the standalone cluster config file with bmctl

After you've logged in to gcloud and set up your project, you can create the cluster config file with the bmctl command. Note that in this example, all service accounts are automatically created by the bmctl create config command:

bmctl create config -c STANDALONE_CLUSTER_NAME --enable-apis \
    --create-service-accounts --project-id=CLOUD_PROJECT_ID

The following shows an example of creating a config file for a standalone cluster named standalone1 associated with the project ID my-gcp-project:

bmctl create config -c standalone1 --create-service-accounts --project-id=my-gcp-project

The file is written to bmctl-workspace/standalone1/standalone1.yaml.

An alternative to automatically enabling APIs and creating service accounts is to provide your existing service accounts with the proper IAM permissions. This means you can skip the automatic service account creation from the previous step in the bmctl command (a role-granting sketch follows the command):

bmctl create config -c standalone1
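
If you go this route, a minimal sketch of granting one of the required roles to an existing service account, assuming a hypothetical account my-existing-sa in my-gcp-project; repeat for each role, and check the documentation for the exact roles your setup needs:

gcloud projects add-iam-policy-binding my-gcp-project \
    --member="serviceAccount:my-existing-sa@my-gcp-project.iam.gserviceaccount.com" \
    --role=roles/gkehub.connect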

Editing the cluster config file

Now that you have a cluster config file, edit it to make the following changes:

  1. Provide the SSH private key used to access the standalone cluster nodes (a key distribution sketch follows this list):
    # bmctl configuration variables. Because this section is valid YAML but not a valid Kubernetes
    # resource, this section can only be included when using bmctl to
    # create the initial admin/hybrid cluster. Afterwards, when creating user clusters by directly
    # applying the cluster and node pool resources to the existing cluster, you must remove this
    # section.
    gcrKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-gcr.json
    sshPrivateKeyPath: /path/to/your/ssh_private_key
    gkeConnectAgentServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-connect.json
    gkeConnectRegisterServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-register.json
    cloudOperationsServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-cloud-ops.json
  2. Change the config to specify a cluster type of standalone instead of admin:
    spec:
      # Cluster type. This can be:
      #   1) admin:  to create an admin cluster. This can later be used to create user clusters.
      #   2) user:   to create a user cluster. Requires an existing admin cluster.
      #   3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
      #   4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
      type: standalone
  3. Optional: Change the config to specify a multi-node, high availability control plane. Specify an odd number of nodes so you can have a majority quorum for HA:
      # Control plane configuration
      controlPlane:
        nodePoolSpec:
          nodes:
          # Control plane node pools. Typically, this is either a single machine
          # or 3 machines if using a high availability deployment.
          - address: 10.200.0.4
          - address: 10.200.0.5
          - address: 10.200.0.6
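
For step 1, a minimal sketch of generating an SSH key pair and copying the public key to each node, assuming root login and the example node addresses above; adapt the user and addresses to your environment. sshPrivateKeyPath would then point to ~/.ssh/anthos_bm_key:

ssh-keygen -t rsa -f ~/.ssh/anthos_bm_key -N ""
ssh-copy-id -i ~/.ssh/anthos_bm_key.pub root@10.200.0.4
ssh-copy-id -i ~/.ssh/anthos_bm_key.pub root@10.200.0.5
ssh-copy-id -i ~/.ssh/anthos_bm_key.pub root@10.200.0.6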

Creating the standalone cluster with the cluster config

Use the bmctl command to deploy the standalone cluster:

bmctl create cluster -c CLUSTER_NAME

CLUSTER_NAME specifies the name of the cluster created in the previous section.

The following is an example of the command to create a cluster named standalone1:

bmctl create cluster -c standalone1
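
After the command completes, a quick sanity-check sketch, assuming bmctl writes the cluster kubeconfig to its default workspace path for this cluster name (confirm the exact path in the bmctl output):

export KUBECONFIG=bmctl-workspace/standalone1/standalone1-kubeconfig
kubectl get nodes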

Example of a complete standalone cluster config

The following is an example of a standalone cluster config file created by the bmctl command. Note that this sample config uses placeholder cluster names, VIPs, and addresses. They might not work on your network.

gcrKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-gcr.json
sshPrivateKeyPath: /bmctl/bmctl-workspace/.ssh/id_rsa
gkeConnectAgentServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-connect.json
gkeConnectRegisterServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-register.json
cloudOperationsServiceAccountKeyPath: /bmctl/bmctl-workspace/.sa-keys/my-gcp-project-anthos-baremetal-cloud-ops.json
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-standalone1
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: standalone1
  namespace: cluster-standalone1
spec:
  # Cluster type. This can be:
  #   1) admin:  to create an admin cluster. This can later be used to create user clusters.
  #   2) user:   to create a user cluster. Requires an existing admin cluster.
  #   3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
  #   4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
  type: standalone
  # Anthos cluster version.
  anthosBareMetalVersion: 1.6.2
  # GKE connect configuration
  gkeConnect:
    projectID: $GOOGLE_PROJECT_ID
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: 10.200.0.4
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which Pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service VIPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 10.96.0.0/12
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can be either 'bundled' or 'manual'.
    # In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
    # In 'manual' mode the cluster relies on a manually-configured external load balancer.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the LB serves the kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # There are two load balancer VIPs: one for the control plane and one for the L7 Ingress
    # service. The VIPs must be in the same subnet as the load balancer nodes.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: 10.200.0.71
      # IngressVIP specifies the VIP shared by all services for ingress traffic.
      # Allowed only in non-admin clusters.
      # This address must be in the address pools below.
      ingressVIP: 10.200.0.72
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    addressPools:
    - name: pool1
      addresses:
      # Each address must be either in the CIDR form (1.2.3.0/24)
      # or range form (1.2.3.1-1.2.3.5).
      - 10.200.0.72-10.200.0.90
    # A load balancer nodepool can be configured to specify nodes used for load balancing.
    # These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #  nodes:
    #  - address: <Machine 1 IP>
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Logging and Monitoring
  clusterOperations:
    # Cloud project for logs and metrics.
    projectID: $GOOGLE_PROJECT_ID
    # Cloud location for logs and metrics.
    location: us-central1
    # Whether collection of application logs/metrics should be enabled (in addition to
    # collection of system logs/metrics which correspond to system components such as
    # Kubernetes control plane or cluster management agents).
    # enableApplication: false
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # Authentication; uncomment this section if you wish to enable authentication to the cluster with OpenID Connect.
  # authentication:
  #   oidc:
  #     # issuerURL specifies the URL of your OpenID provider, such as "https://accounts.google.com". The Kubernetes API
  #     # server uses this URL to discover public keys for verifying tokens. Must use HTTPS.
  #     issuerURL: <URL for OIDC Provider; required>
  #     # clientID specifies the ID for the client application that makes authentication requests to the OpenID
  #     # provider.
  #     clientID: <ID for OIDC client application; required>
  #     # clientSecret specifies the secret for the client application.
  #     clientSecret: <Secret for OIDC client application; optional>
  #     # kubectlRedirectURL specifies the redirect URL (required) for the gcloud CLI, such as
  #     # "http://localhost:[PORT]/callback".
  #     kubectlRedirectURL: <Redirect URL for the gcloud CLI; optional, default is "http://kubectl.redirect.invalid">
  #     # username specifies the JWT claim to use as the username. The default is "sub", which is expected to be a
  #     # unique identifier of the end user.
  #     username: <JWT claim to use as the username; optional, default is "sub">
  #     # usernamePrefix specifies the prefix prepended to username claims to prevent clashes with existing names.
  #     usernamePrefix: <Prefix prepended to username claims; optional>
  #     # group specifies the JWT claim that the provider will use to return your security groups.
  #     group: <JWT claim to use as the group name; optional>
  #     # groupPrefix specifies the prefix prepended to group claims to prevent clashes with existing names.
  #     groupPrefix: <Prefix prepended to group claims; optional>
  #     # scopes specifies additional scopes to send to the OpenID provider as a comma-delimited list.
  #     scopes: <Additional scopes to send to OIDC provider as a comma-separated list; optional>
  #     # extraParams specifies additional key-value parameters to send to the OpenID provider as a comma-delimited
  #     # list.
  #     extraParams: <Additional key-value parameters to send to OIDC provider as a comma-separated list; optional>
  #     # certificateAuthorityData specifies a Base64 PEM-encoded certificate authority certificate of your identity
  #     # provider. It's not needed if your identity provider's certificate was issued by a well-known public CA.
  #     certificateAuthorityData: <Base64 PEM-encoded certificate authority certificate of your OIDC provider; optional>
  # Node access configuration; uncomment this section if you wish to use a non-root user
  # with passwordless sudo capability for machine login.
  # nodeAccess:
  #   loginUser: login user name
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-standalone1
spec:
  clusterName: standalone1
  nodes:
  - address: 10.200.0.5
  - address: 10.200.0.6