Sample configuration file reference for Anthos running in disconnected mode

Use the following sample configuration files with Anthos running in disconnected mode to understand the product APIs for managing clusters and configuring Anthos features.

Admin cluster and NodePool

The following is a sample admin cluster configuration file for Anthos running in disconnected mode.

Notes about the sample:

  • The actl configuration variables at the beginning of the file are valid YAML but not a valid Kubernetes resource, and can only be included when using actl to create the initial admin cluster.
  • The cluster name must be admin.
  • The spec.type field must be admin.
  • The configuration parameters do not support shell expansion. Specify absolute paths.
# actl configuration variables. Because this section is valid YAML but not a
# valid Kubernetes resource, this section can only be included when using actl
# to create the initial admin cluster. Afterwards, when creating user clusters
# by directly applying the cluster and node pool resources to the existing admin
# cluster, you must remove this section.
#
sshPrivateKeyPath: <path to SSH private key, used for node access>
registryMirrors:
# Registry endpoint to pull images from. If the registry has a namespace append
# 'v2' after the registry ip or hostname.
# Example: https://registry.example.com/v2/library
- endpoint: <private registry>
  # Example: /home/USER/.docker/config.json
  pullCredentialConfigPath: <private registry config file>
  # Not needed for trusted domain.
  # Example: /etc/docker/certs.d/registry.example.com/ca.crt
  caCertPath: <private registry TLS cert>
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-admin
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: admin
  namespace: cluster-admin
  annotations:
    baremetal.cluster.gke.io/private-mode: "true"
spec:
  # Cluster type. This can only be admin for an admin cluster.
  type: admin
  # Anthos cluster version.
  anthosBareMetalVersion: 1.12.0
  # NodeConfig specifies the configuration that applies to all nodes in the cluster.
  nodeConfig:
    containerRuntime: containerd
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: <Machine 1 IP>
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which Pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service VIPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 10.96.0.0/12
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can only be 'bundled'.
    # In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the LB serves the kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # The VIPs must be in the same subnet as the load balancer nodes.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: <control plane VIP>
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    addressPools:
    - name: pool1
      addresses:
      # Each address must be either in the CIDR form (1.2.3.0/24)
      # or range form (1.2.3.1-1.2.3.5).
      - <VIP address pools>
    # A load balancer nodepool can be configured to specify nodes used for load balancing.
    # These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # It's recommended to have the LB node pool for non-admin clusters.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #  nodes:
    #   - address: <Machine 1 IP>
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # Node access configuration; to use a non-root user with passwordless sudo capability for machine login.
  nodeAccess:
    loginUser: <login user name>
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-admin
spec:
  clusterName: admin
  nodes:
  - address: <Machine 2 IP>
  - address: <Machine 3 IP>
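
For reference, the actl preamble at the top of this file might be filled in as follows. All values are illustrative assumptions; substitute your own absolute paths and registry endpoint:

# Illustrative values only; paths must be absolute.
sshPrivateKeyPath: /home/ubuntu/.ssh/id_rsa
registryMirrors:
- endpoint: https://10.200.0.2/v2/library
  pullCredentialConfigPath: /home/ubuntu/.docker/config.json
  caCertPath: /etc/docker/certs.d/10.200.0.2/ca.crt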

User cluster and NodePool

The following is a sample user cluster configuration file for Anthos running in disconnected mode.

For more details, see the Anthos on bare metal documentation at https://cloud.google.com/anthos/clusters/docs/bare-metal.

Notes:

  • This is very similar to the admin cluster, but it uses a few different defaults.
  • You can apply the cluster and node pool resources directly to the admin cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-<cluster-name>
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: <cluster-name>
  namespace: cluster-<cluster-name>
  annotations:
    baremetal.cluster.gke.io/private-mode: "true"
spec:
  # Cluster type. This can only be user for a user cluster.
  type: user
  # Anthos cluster version.
  anthosBareMetalVersion: 1.12.0
  # NodeConfig specifies the configuration that applies to all nodes in the cluster.
  nodeConfig:
    containerRuntime: containerd
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: <Machine 4 IP>
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which Pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service VIPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 10.96.0.0/12
  # Credentials specify the secrets that hold SSH key and image pull credential for the new cluster.
  # credentials:
  #  # Optionally override default ssh key secret inherited from the admin cluster.
  #  sshKeySecret:
  #    name: SSH_KEY_SECRET
  #    namespace: cluster-<cluster-name>
  #  # Optionally override default image pull secret inherited from the admin cluster.
  #  imagePullSecret:
  #    name: IMAGE_PULL_SECRET
  #    namespace: cluster-<cluster-name>
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can only be 'bundled'.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the LB serves the kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # The VIPs must be in the same subnet as the load balancer nodes.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: <control plane VIP>
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    addressPools:
    - name: pool1
      addresses:
      # Each address must be either in the CIDR form (1.2.3.0/24)
      # or range form (1.2.3.1-1.2.3.5).
      - <VIP address pools>
    # A load balancer nodepool can be configured to specify nodes used for load balancing.
    # These nodes are part of the kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #  nodes:
    #  - address: <Machine 7 IP>
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # Node access configuration; to use a non-root user with passwordless sudo capability for machine login.
  nodeAccess:
    loginUser: <login user name>
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: <cluster-name>-worker-node-pool
  namespace: cluster-<cluster-name>
spec:
  clusterName: <cluster-name>
  nodes:
  - address: <Machine 5 IP>
  - address: <Machine 6 IP>
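
As the comments above note, a dedicated load balancer node pool is recommended for user clusters. A minimal sketch of the loadBalancer section with that pool filled in, using illustrative addresses:

  loadBalancer:
    mode: bundled
    ports:
      controlPlaneLBPort: 443
    vips:
      # Must not fall inside the address pools below.
      controlPlaneVIP: 10.200.0.50
    addressPools:
    - name: pool1
      addresses:
      - 10.200.0.51-10.200.0.70
    nodePoolSpec:
      nodes:
      # Load balancer node; part of the cluster and also runs regular workloads.
      - address: 10.200.0.7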

Admin operator

The following is a sample admin operator configuration file for Anthos running in disconnected mode. This configuration file controls the Management Center.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: AdminOperator
metadata:
  name: admin-operator
spec:
  billingInfo:
    projectNumber: <your Google Cloud Platform project number>
    # FreeTrialExpiration indicates if the project has a free trial and the time
    # when that free trial ends. Format: date-time in RFC 3339.
    # It's not a free trial by default when not specified.
    # freeTrialExpiration: <2021-07-01T00:00:00Z>
  # UpdateConfigOverride can be optionally provided to override the default
  # update configuration for components.
  # All the components will be running on the same version as the admin operator
  # by default, unless an override is set via this field.
  updateConfigOverride:
    policies:
    - name: "<component name, for example: anthos-config-management>"
      versionConstraint: "<=1.9.0"

InventoryMachine

The following is a sample InventoryMachine configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the machines that are required for user cluster creation.

apiVersion: baremetal.cluster.gke.io/v1alpha1
kind: InventoryMachine
metadata:
  name: <Machine IP address>
  # Optional: used by the Management Center to inform customers
  labels:
    key1: value1
    key2: value2
spec:
  # Address specifies the default IPv4 address for SSH access and Kubernetes node.
  # Routable from the admin cluster.
  # Example: 192.168.0.1
  # This field is immutable.
  # This field is required.
  address: <Machine IP address>
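
For example, a machine at 10.200.0.8 could be registered as follows. As in the sample above, the resource name matches the machine's IP address; the labels are illustrative:

apiVersion: baremetal.cluster.gke.io/v1alpha1
kind: InventoryMachine
metadata:
  name: 10.200.0.8
  labels:
    rack: rack-1
    disk: ssd
spec:
  address: 10.200.0.8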

AddressPool

The following is a sample AddressPool configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the virtual IP addresses that are required for user cluster creation.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: AddressPool
metadata:
  # Don't change the name, only `anthos-addresspool-default` allowed.
  name: anthos-addresspool-default
spec:
  description: <description text>
  addresses:
  # All addresses below are a list of non-overlapping IP ranges.
  # Address Range, must be in the single IP address form (1.2.3.4),
  # CIDR form (1.2.3.0/24) or range form (1.2.3.1-1.2.3.5).
  - <VIP address range>
  - <VIP address>
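
A filled-in pool showing the accepted address forms (single IP, CIDR, and range); all values are illustrative:

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: AddressPool
metadata:
  name: anthos-addresspool-default
spec:
  description: VIPs available for user cluster creation
  addresses:
  - 10.200.0.100
  - 10.200.1.0/28
  - 10.200.0.101-10.200.0.110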

BootstrapService

The following is a sample BootstrapService configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides bootstrap services (for example, third-party storage providers or GPU drivers) that are required for user cluster creation.

You can create the configmap with kubectl create configmap <name of configmap> --from-file=<name of manifest>.yaml.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: BootstrapService
metadata:
  name: <name of the bootstrap service>
  namespace: anthos-management-center
spec:
  # If set to True, this configuration can be applied to many user clusters,
  # e.g. a GPU driver configuration. If False, this configuration can only be
  # applied to a single user cluster, e.g. a CSI Driver + StorageClass
  # combination which is intended for exclusive use by a single user cluster.
  # Defaults to False.
  isReusable: False
  configMapRef:
    name: <name of configmap>
    namespace: anthos-management-center
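
The ConfigMap referenced by configMapRef holds the manifests to apply, keyed by file name. A minimal sketch, assuming the manifest is a StorageClass for a hypothetical CSI driver:

apiVersion: v1
kind: ConfigMap
metadata:
  name: <name of configmap>
  namespace: anthos-management-center
data:
  # The key name comes from the file passed to --from-file.
  storageclass.yaml: |
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: example-storage
    provisioner: csi.example.com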

BootstrapServiceBinding

The following is a sample BootstrapServiceBinding configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and binds a BootstrapService to the target clusters at cluster creation time.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: BootstrapServiceBinding
metadata:
  name: <name of the bootstrap service binding>
  namespace: anthos-management-center
spec:
  configs:
  - configRef:
      name: <name of the bootstrap service>
      namespace: anthos-management-center
    placement:
      clusterIDs:
      - "<cluster-name>"

ConfigManagementFeatureSpec

The following is a sample ConfigManagementFeatureSpec configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the spec definition for Anthos Config Management.

See https://cloud.google.com/anthos/config-management for more Anthos Config Management documentation.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: ConfigManagementFeatureSpec
metadata:
  name: <name of config management spec>
  namespace: anthos-management-center
spec:
  version: "1.7.1"
  git:
    syncRepo: "git@<YOUR_GIT_REPO>.git"
    policyDir: "."
    secretType: "ssh"
    syncBranch: "master"
    syncRev: "HEAD"
    syncWait: 15

  # See https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/unstructured-repo
  # for the difference between `hierarchy` and `unstructured` source format.
  sourceFormat: unstructured

  # See https://cloud.google.com/anthos-config-management/docs/concepts/policy-controller
  # for more about Policy Controller.
  policyController:
    enabled: true

  # See https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/concepts/hierarchy-controller
  # for more background regarding Hierarchy Controller.
  hierarchyController:
    enabled: true

  # [Optional] The Secret on the admin cluster to access the config-management repo.
  # If set, the secret referenced will be copied to user clusters to allow ACM to access the Git repo.
  # If not set, users will need to create the Git credential secret on the user cluster by themselves.
  secretRef:
    name: git-creds
    namespace: anthos-management-center
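
The secret referenced by secretRef can be created on the admin cluster before this spec is applied. A sketch of such a Secret, assuming the Git SSH private key is stored under the ssh key (the key name Config Sync expects for secretType: ssh):

apiVersion: v1
kind: Secret
metadata:
  name: git-creds
  namespace: anthos-management-center
type: Opaque
stringData:
  # Private key with read access to the config repository (placeholder).
  ssh: |
    <contents of SSH private key>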

ConfigManagementBinding

The following is a sample ConfigManagementBinding configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and installs Anthos Config Management on user clusters.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: ConfigManagementBinding
metadata:
  name: <name of config management binding>
  namespace: anthos-management-center
spec:
  configs:
  - configRef:
      name: <name of config management spec>
      namespace: anthos-management-center
    placement:
      clusterIDs:
      - "<cluster-name>"

ServiceMeshFeatureSpec

The following is a sample ServiceMeshFeatureSpec configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and provides the spec definition for Anthos Service Mesh.

See https://cloud.google.com/anthos/service-mesh for more Anthos Service Mesh documentation.

apiVersion: managementcenter.anthos.cloud.google.com/v1alpha1
kind: ServiceMeshFeatureSpec
metadata:
  name: <name of service mesh spec>
  namespace: anthos-management-center
spec:
  version: 1.9.6-asm.1

ServiceMeshBinding

The following is a sample ServiceMeshBinding configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and installs Anthos Service Mesh on user clusters.

apiVersion: managementcenter.anthos.cloud.google.com/v1alpha1
kind: ServiceMeshBinding
metadata:
  name: <name of service mesh binding>
  namespace: anthos-management-center
spec:
  configs:
  - configRef:
      name: <name of service mesh spec>
      namespace: anthos-management-center
    placement:
      clusterIDs:
      - "<cluster-name>"

Anthos Identity Service

The following is a sample ClientConfig configuration file for Anthos running in disconnected mode.

This file is applied to the admin cluster and provides the client identity configuration.

apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
spec:
  authentication:
  - name: https://accounts.google.com
    oidc:
      clientID: <redacted>
      clientSecret: <redacted>
      cloudConsoleRedirectURI: http://cloud.console.not.enabled
      extraParams: prompt=consent,access_type=offline
      issuerURI: https://accounts.google.com
      kubectlRedirectURI: http://localhost:9879/callback
      scopes: email
      userClaim: email
  certificateAuthorityData: <DO NOT CHANGE>
  name: <DO NOT CHANGE>
  server: <DO NOT CHANGE>
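
Additional identity providers can be appended as further items under spec.authentication; the name of each entry is what a DomainConfig's authMethodName (below) refers to. A sketch of an extra OIDC entry with placeholder values, reusing only the fields shown in the sample above:

  authentication:
  - name: <name of your OIDC provider>
    oidc:
      clientID: <OIDC client ID>
      clientSecret: <OIDC client secret>
      extraParams: prompt=consent,access_type=offline
      issuerURI: https://<issuer URI of your identity provider>
      kubectlRedirectURI: http://localhost:9879/callback
      scopes: email
      userClaim: email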

DomainConfig

The following is a sample DomainConfig configuration file for Anthos running in disconnected mode. This file is applied to the admin cluster and configures, for a given domain name, the authentication method name to use and the certificate that secures HTTPS connections to the web endpoints of Anthos running in disconnected mode. With this configuration in place, the login redirector can automatically redirect unauthenticated requests to the appropriate login page based on the domain name used in the request.

apiVersion: managementcenter.anthos.cloud.google.com/v1
kind: DomainConfig
metadata:
  # name is the domain name used to serve the Anthos web endpoints.
  # This should be a valid fully qualified domain name.
  # It should not include the protocol such as http or https.
  # Example of incorrect domain names: http://anthos, anthos, anthos*.com
  # Example of correct domain names: anthos.example.com
  name: <name of the domain>
spec:
  # authMethodName is the name of the authentication configured
  # in the Anthos Identity Service's ClientConfig that should be used for
  # this domain name.
  authMethodName: <name in ClientConfig.Spec.Authentication.Name>

  # If not specified, a self-signed certificate (untrusted) will be used.
  # To configure the TLS certificate, copy the certificate in a secret in
  # istio-system namespace and reference the name of the secret in certSecretName.
  # The referred secret must be of the type "kubernetes.io/tls".
  # The referred secret must in istio-system namespace.
  certSecretName: <cert secret name>
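
The secret referenced by certSecretName is a standard TLS secret in the istio-system namespace. A sketch with placeholder data:

apiVersion: v1
kind: Secret
metadata:
  name: <cert secret name>
  # Must be in the istio-system namespace.
  namespace: istio-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>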

Logmon and ConfigMaps for additional configuration

The following is a sample Logmon configuration file used to manage monitoring and logging for clusters in Anthos running in disconnected mode.

Notes about the sample:

  • The name of the Logmon resource must be logmon-default.
  • The namespace of the Logmon resource must be kube-system.
  • The ConfigMaps listed in fluentbitConfigmaps must follow the syntax of fluent-bit output plugins.
  • The ConfigMaps listed in alertmanagerConfigurationConfigmaps must follow the alertmanager configuration syntax.
  • The ConfigMaps listed in prometheusRulesConfigmaps must follow the syntax of Prometheus recording rules and Prometheus alerting rules.
apiVersion: addons.gke.io/v1alpha1
kind: Logmon
metadata:
  # Don't change the name
  name: logmon-default
  # Don't change the namespace
  namespace: kube-system
spec:
  system_logs:
    outputs:
      additionalOutput:
        fluentbitConfigmaps:
        # Same syntax as fluent-bit output plugins, see 'Sample fluentbitConfigmaps' below as example
        - "<customized-system-logs-fluent-bit-output-config>"
        # Scheme: []v1.VolumeMount
        volumeMounts:
        - ...
        - ...
        # Scheme: []v1.Volume
        volumes:
        - ...
        - ...
      default_loki:
        deployment:
          components:
            loki:
              storageSize: 20Gi # "<storage-size>"
          retentionPolicy:
            retentionTime: 720h # "<retention-time>"
          storageClassName: anthos-system # "<storage-class-name>"
  system_metrics:
    outputs:
      default_prometheus:
        deployment:
          components:
            alertmanager:
              alertmanagerConfigurationConfigmaps:
              # Same syntax as alertmanager configuration, see 'Sample alertmanagerConfigurationConfigmaps' below as example
              - "<customized-alertmanager-configmap-name>"
              storageSize: 1Gi # "<storage-size>"
            grafana:
              storageSize: 1Gi # "<storage-size>"
            prometheus:
              prometheusRulesConfigmaps:
              # Same syntax as prometheus recording rules and prometheus alerting rules, see 'Sample prometheusRulesConfigmaps' below as example
              - "<customized-prometheus-rules-configmap-name>"
              storageSize: 20Gi # "<storage-size>"
          retentionPolicy:
            retentionTime: 720h # "<retention-time>"
          storageClassName: anthos-system # "<storage-class-name>"
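
The volumeMounts and volumes fields under additionalOutput follow the standard Kubernetes v1.VolumeMount and v1.Volume schemas. For example, if the custom fluent-bit output needs a CA certificate from a Secret, the two fields might be filled in as follows (the names are illustrative assumptions):

        volumeMounts:
        - name: remote-log-ca
          mountPath: /etc/fluent-bit/remote-log-ca
          readOnly: true
        volumes:
        - name: remote-log-ca
          secret:
            secretName: <name of Secret holding the CA certificate>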

Sample fluentbitConfigmaps

Notes about the sample:

  • The namespace must be kube-system.
  • The logmon label is required.
  • The key in the ConfigMap must be output.conf.
apiVersion: v1
kind: ConfigMap
metadata:
  name: <customized-system-logs-fluent-bit-output-config>
  # Don't change the namespace
  namespace: kube-system
  labels:
    # This label is required.
    logmon: system_logs
data:
  # The file name must be output.conf
  output.conf: |
    # Please fill customized fluent-bit output plugin configuration below
    [OUTPUT]
        Name   stdout
        Match  *

Sample alertmanagerConfigurationConfigmaps

Notes about the sample:

  • The namespace must be kube-system.
  • The logmon label is required.
  • The key in the ConfigMap must be alertmanager.yml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: <customized-alertmanager-configmap-name>
  # Don't change the namespace
  namespace: kube-system
  labels:
    # This label is required.
    logmon: system_metrics
data:
  # The file name must be alertmanager.yml
  alertmanager.yml: |
    # Please fill customized alertmanager configuration below
    global:
      # Also possible to place this URL in a file.
      # Ex: `slack_api_url_file: '/etc/alertmanager/slack_url'`
      slack_api_url: '<slack_webhook_url>'

    route:
      receiver: 'slack-notifications'
      group_by: [alertname, datacenter, app]

    receivers:
    - name: 'slack-notifications'
      slack_configs:
      - channel: '#alerts'
        text: 'https://internal.myorg.net/wiki/alerts/'

Sample prometheusRulesConfigmaps

Notes about the sample:

  • The namespace must be kube-system.
  • The logmon label is required.
  • If multiple ConfigMaps are listed in prometheusRulesConfigmaps in the Logmon resource, the keys must be unique across all of the ConfigMaps.
apiVersion: v1
kind: ConfigMap
metadata:
  name: <customized-prometheus-rules-configmap-name>
  # Don't change the namespace
  namespace: kube-system
  labels:
    # This label is required.
    logmon: system_metrics
data:
  # The file name must be unique across all customized prometheus rule files.
  <a-unique-file-name>: |
   # Please fill customized recording rules below
   groups:
    - name: kubernetes-apiserver
      rules:
      - alert: KubeAPIDown
        annotations:
          message: KubeAPI has disappeared from Prometheus target discovery.
          runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeapidown
        expr: |
          absent(up{job="kube-apiserver"} == 1)
        for: 15m
        labels:
          severity: critical

  # The file name must be unique across all customized prometheus rule files.
  <a-unique-file-name>: |
   # Please fill customized alerting rules below
   groups:
    - name: node.rules
      rules:
      - expr: |
          topk by(cluster, namespace, pod) (1,
            max by (cluster, node, namespace, pod) (
              label_replace(kube_pod_info{job="kube-state-metrics",node!=""}, "pod", "$1", "pod", "(.*)")
          ))
        record: 'node_namespace_pod:kube_pod_info:'