Creating alerting policies

This page shows how to create alerting policies for GKE on Bare Metal clusters.

Before you begin

To create and manage alerting policies, you must have the following permissions:

  • monitoring.alertPolicies.create
  • monitoring.alertPolicies.delete
  • monitoring.alertPolicies.update

You'll have these permissions if you have any one of the following roles:

  • Monitoring AlertPolicy Editor (roles/monitoring.alertPolicyEditor)
  • Monitoring Editor (roles/monitoring.editor)
  • Project Editor (roles/editor)
  • Project Owner (roles/owner)

To check your roles, go to the IAM page in the Google Cloud console.
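
You can also check your roles from the command line. The following sketch uses gcloud to list the roles bound to your account; PROJECT_ID and the email address are placeholders that you must replace with your own values:

```shell
# List the roles granted to a user in a project.
# PROJECT_ID and you@example.com are placeholders; substitute your own.
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:user:you@example.com" \
    --format="table(bindings.role)"
```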

Creating a policy: cluster API server down

In this exercise, you create an alerting policy that monitors the Kubernetes API servers of your clusters. With this policy in place, you can arrange to be notified whenever the API server of a cluster goes down.

  1. Download the policy configuration file: apiserver-down.json.

  2. Create the policy:

    gcloud alpha monitoring policies create --policy-from-file=POLICY_CONFIG
    

    Replace POLICY_CONFIG with the path of the configuration file you just downloaded.

  3. View your alerting policies:

    Console

    1. In the Google Cloud console, go to the Monitoring page.

      Go to Monitoring

    2. On the left, select Alerting.

    3. Under Policies, you can see a list of your alerting policies.

      In the list, select Anthos on baremetal API server down (critical) to see details about your new policy. Under Conditions, you can see a description of the policy. For example:

      Policy violates when ANY condition is met
      Anthos on baremetal API server is up
      

    gcloud

    gcloud alpha monitoring policies list

    The output shows detailed information about the policy. For example:

    ---
    combiner: OR
    conditions:
    - conditionMonitoringQueryLanguage:
        duration: 0s
        query: |-
          { t_0:
              fetch k8s_container
              | metric 'kubernetes.io/anthos/up'
              | filter (resource.container_name =~ 'kube-apiserver')
              | align mean_aligner()
              | group_by 1m, [value_up_mean: mean(value.up)]
              | every 1m
              | group_by [resource.project_id, resource.location, resource.cluster_name],
                  [value_up_mean_aggregate: aggregate(value_up_mean)]
          ; t_1:
              fetch k8s_container::kubernetes.io/anthos/anthos_cluster_info
              | filter (metric.anthos_distribution = 'baremetal')
              | align mean_aligner()
              | group_by [resource.project_id, resource.location, resource.cluster_name],
                  [value_anthos_cluster_info_aggregate:
                     aggregate(value.anthos_cluster_info)]
              | every 1m }
          | join
          | value [t_0.value_up_mean_aggregate]
          | window 1m
          | absent_for 300s
        trigger:
          count: 1
      displayName: Anthos on baremetal API server is up
      name: projects/xxxxxx/alertPolicies/8497323605386949154/conditions/8497323605386950375
    creationRecord:
      mutateTime: '2021-03-17T23:07:18.618778106Z'
      mutatedBy: sharon@example.com
    displayName: Anthos on baremetal API server down (critical)
    enabled: true
    mutationRecord:
      mutateTime: '2021-03-17T23:07:18.618778106Z'
      mutatedBy: sharon@example.com
    name: projects/xxxxxx/alertPolicies/8497323605386949154
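
    If you want to inspect or remove a single policy later, gcloud also provides describe and delete commands. A minimal sketch, where PROJECT_ID and POLICY_ID are placeholders taken from the name field in the list output:

    ```shell
    # Show the full definition of one policy.
    gcloud alpha monitoring policies describe \
        projects/PROJECT_ID/alertPolicies/POLICY_ID

    # Delete a policy that you no longer need.
    gcloud alpha monitoring policies delete \
        projects/PROJECT_ID/alertPolicies/POLICY_ID
    ```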
    

Creating additional alerting policies

This section provides descriptions and configuration files for a set of recommended alerting policies.

To create a policy, follow the same steps that you used in the preceding exercise:

  1. Click the link in the right column to download the configuration file.

  2. Run gcloud alpha monitoring policies create to create the policy.
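
If you plan to install several of the recommended policies, you can create them in one pass after downloading the configuration files. The loop below is a sketch that assumes the JSON files are in your current directory; the filenames are examples taken from the tables that follow:

```shell
# Create one alerting policy per downloaded configuration file.
for policy_file in apiserver-down.json scheduler-down.json \
    controller-manager-down.json; do
  gcloud alpha monitoring policies create \
      --policy-from-file="${policy_file}"
done
```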

Control plane components availability

  • Anthos on baremetal API server down (critical): API server has disappeared from metrics target discovery. Configuration file: apiserver-down.json
  • Anthos on baremetal scheduler down (critical): Scheduler has disappeared from metrics target discovery. Configuration file: scheduler-down.json
  • Anthos on baremetal controller manager down (critical): Controller manager has disappeared from metrics target discovery. Configuration file: controller-manager-down.json

Kubernetes system

  • Anthos on baremetal pod crash looping (critical): Pod is in a crash loop. Configuration file: pod-crash-looping.json
  • Anthos on baremetal pod not ready for more than one hour (critical): Pod has been in a non-ready state for more than one hour. Configuration file: pod-not-ready-1h.json
  • Anthos on baremetal persistent volume high usage (critical): Claimed persistent volume is expected to fill up. Configuration file: persistent-volume-usage-high.json
  • Anthos on baremetal node not ready for more than one hour (critical): Node has been in a non-ready state for more than one hour. Configuration file: node-not-ready-1h.json
  • Anthos on baremetal node cpu usage exceeds 80 percent (critical): Node CPU usage is over 80%. Configuration file: node-cpu-usage-high.json
  • Anthos on baremetal node memory usage exceeds 80 percent (critical): Node memory usage is over 80%. Configuration file: node-memory-usage-high.json
  • Anthos on baremetal node disk usage exceeds 80 percent (critical): Node disk usage is over 80%. Configuration file: node-disk-usage-high.json

Kubernetes performance

  • Anthos on baremetal API server error count ratio exceeds 10 percent (critical): API server is returning errors for more than 10% of requests. Configuration file: api-server-error-ratio-10-percent.json
  • Anthos on baremetal API server error count ratio exceeds 5 percent (warning): API server is returning errors for more than 5% of requests. Configuration file: api-server-error-ratio-5-percent.json
  • Anthos on baremetal etcd leader change too frequent (critical): The etcd leader changes too frequently. Configuration file: etcd-leader-changes-too-frequent.json
  • Anthos on baremetal etcd proposals failed too frequent (critical): etcd proposals are failing too frequently. Configuration file: etcd-proposals-failed-too-frequent.json
  • Anthos on baremetal etcd server is not in quorum (critical): The etcd server is not in quorum. Configuration file: etcd-server-not-in-quorum.json

Getting notified

After you create an alerting policy, you can define one or more notification channels for it. Cloud Monitoring supports several kinds of notification channels; for example, you can be notified by email, in a Slack channel, or through a mobile app. Choose the channels that suit your needs.

For instructions about how to configure notification channels, see Managing notification channels.
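
As a quick orientation, the following sketch creates an email notification channel and attaches it to an existing policy. The display name, email address, and resource IDs are placeholder values; see Managing notification channels for the authoritative steps:

```shell
# Create an email notification channel (the display name and address
# are example values; replace them with your own).
gcloud alpha monitoring channels create \
    --display-name="On-call email" \
    --type=email \
    --channel-labels=email_address=oncall@example.com

# Attach the channel to an existing policy. Replace PROJECT_ID,
# POLICY_ID, and CHANNEL_ID with values from the corresponding
# "list" commands.
gcloud alpha monitoring policies update \
    projects/PROJECT_ID/alertPolicies/POLICY_ID \
    --add-notification-channels=projects/PROJECT_ID/notificationChannels/CHANNEL_ID
```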