This page shows how to create alerting policies for Google Distributed Cloud clusters.
Before you begin
You must have the following permissions to create alerting policies:
- monitoring.alertPolicies.create
- monitoring.alertPolicies.delete
- monitoring.alertPolicies.update
You have these permissions if you have any one of the following roles:
- monitoring.alertPolicyEditor
- monitoring.editor
- Project Editor
- Project Owner
To check your roles, go to the IAM page in the Google Cloud console.
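You can also list your granted roles from the command line. A minimal sketch, assuming you substitute your own project ID and account email:

# List the roles granted to a specific user on the project:
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:user:USER_EMAIL" \
    --format="table(bindings.role)"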
Creating a policy: Anthos on baremetal cluster API server unavailable
In this exercise, you create an alerting policy for the Kubernetes API servers of your clusters. With this policy in place, you can arrange to be notified whenever the API server of a cluster becomes unavailable.
Download the policy configuration file: apiserver-unavailable.json
Create the policy:
gcloud alpha monitoring policies create --policy-from-file=POLICY_CONFIG
Replace POLICY_CONFIG with the path of the configuration file you just downloaded.
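For example, if the downloaded file is in your current working directory:

gcloud alpha monitoring policies create --policy-from-file=apiserver-unavailable.json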
View your alerting policies:
Console
In the Google Cloud console, go to the Monitoring page.
On the left, select Alerting.
Under Policies, you can see a list of your alerting policies.
In the list, select Anthos on baremetal cluster API server unavailable (critical) to see details about your new policy. Under Conditions, you can see a description of the policy. For example:
Policy violates when ANY condition is met
Anthos on baremetal cluster API server uptime is absent
Anthos on baremetal cluster API server uptime is less than 99.99% per minute
gcloud
gcloud alpha monitoring policies list
The output shows detailed information about the policy. For example:
combiner: OR
conditions:
- conditionAbsent:
    aggregations:
    - alignmentPeriod: 60s
      crossSeriesReducer: REDUCE_MEAN
      groupByFields:
      - resource.label.project_id
      - resource.label.location
      - resource.label.cluster_name
      - resource.label.namespace_name
      - resource.label.container_name
      - resource.label.pod_name
      perSeriesAligner: ALIGN_MAX
    duration: 300s
    filter: resource.type = "k8s_container" AND resource.labels.namespace_name = "kube-system" AND metric.type = "kubernetes.io/anthos/container/uptime" AND resource.label."container_name"=monitoring.regex.full_match("kube-apiserver")
    trigger:
      count: 1
  displayName: Anthos on baremetal cluster API server uptime is absent
  name: projects/…/alertPolicies/12404845535868002666/conditions/12404845535868003603
- conditionThreshold:
    aggregations:
    - alignmentPeriod: 120s
      crossSeriesReducer: REDUCE_MEAN
      groupByFields:
      - resource.label.project_id
      - resource.label.location
      - resource.label.cluster_name
      - resource.label.namespace_name
      - resource.label.container_name
      - resource.label.pod_name
      perSeriesAligner: ALIGN_MAX
    comparison: COMPARISON_LT
    duration: 300s
    filter: resource.type = "k8s_container" AND resource.labels.namespace_name = "kube-system" AND metric.type = "kubernetes.io/anthos/container/uptime" AND resource.label."container_name"=monitoring.regex.full_match("kube-apiserver")
    thresholdValue: 119.0
    trigger:
      count: 1
  displayName: Anthos on baremetal cluster API server uptime is less than 99.99% per minute
  name: projects/…/alertPolicies/12404845535868002666/conditions/12404845535868004540
creationRecord:
  mutateTime: …
  mutatedBy: …
displayName: Anthos on baremetal cluster API server unavailable (critical)
enabled: true
mutationRecord:
  mutateTime: …
  mutatedBy: …
name: projects/…/alertPolicies/12404845535868002666
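If your project has many policies, you can narrow the list by filtering on the display name. A minimal sketch using the standard gcloud --filter flag:

gcloud alpha monitoring policies list \
    --filter='displayName="Anthos on baremetal cluster API server unavailable (critical)"'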
Creating additional alerting policies
This section provides descriptions and configuration files for a set of recommended alerting policies.
To create a policy, follow the same steps that you used in the preceding exercise:
To download the configuration file, click the link in the right column.
To create the policy, run gcloud alpha monitoring policies create --policy-from-file=POLICY_CONFIG, replacing POLICY_CONFIG with the path of the downloaded file.
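For example, to create the scheduler policy after downloading scheduler-unavailable.json:

gcloud alpha monitoring policies create --policy-from-file=scheduler-unavailable.json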
You can download and install all of the alert policy samples described in this document with the following script:
# 1. Create a directory named alert_samples:
mkdir alert_samples && cd alert_samples
declare -a alerts=("apiserver-unavailable.json" "scheduler-unavailable.json" "controller-manager-unavailable.json" "pod-crash-looping.json" "container-memory-usage-high-reaching-limit.json"
"container-cpu-usage-high-reaching-limit.json" "pod-not-ready-1h.json" "persistent-volume-usage-high.json" "node-not-ready-1h.json" "node-cpu-usage-high.json" "node-memory-usage-high.json"
"node-disk-usage-high.json" "api-server-error-ratio-10-percent.json" "api-server-error-ratio-5-percent.json" "etcd-leader-changes-too-frequent.json" "etcd-proposals-failed-too-frequent.json"
"etcd-server-not-in-quorum.json" "etcd-storage-usage-high.json")
# 2. Download all alert samples into the alert_samples/ directory:
for x in "${alerts[@]}"
do
wget https://cloud.google.com/anthos/clusters/docs/bare-metal/1.12/samples/${x}
done
# 3. (optional) Uncomment and provide your project ID to set the default project
# for gcloud commands:
# gcloud config set project <PROJECT_ID>
# 4. Create alert policies for each of the downloaded samples:
for x in "${alerts[@]}"
do
gcloud alpha monitoring policies create --policy-from-file=${x}
done
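To confirm that every policy was created, you can list the display names of the alerting policies in your project:

# 5. (optional) Verify the created policies by listing their display names:
gcloud alpha monitoring policies list --format="value(displayName)"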
Control plane components availability
Alert name | Description | Alerting policy definition in Cloud Monitoring |
---|---|---|
Anthos on baremetal cluster API server unavailable (critical) | API server is not up or uptime is less than 99.99% per minute | apiserver-unavailable.json |
Anthos on baremetal cluster scheduler unavailable (critical) | Scheduler is not up or uptime is less than 99.99% per minute | scheduler-unavailable.json |
Anthos on baremetal controller manager unavailable (critical) | Controller manager has disappeared from metrics target discovery | controller-manager-unavailable.json |
Kubernetes system
Alert name | Description | Alerting policy definition in Cloud Monitoring |
---|---|---|
Anthos on baremetal pod crash looping (critical) | Pod restarted and might be in a crash loop status | pod-crash-looping.json |
Anthos on baremetal container memory usage exceeds 85 percent (warning) | Container memory usage is over 85% of limit | container-memory-usage-high-reaching-limit.json |
Anthos on baremetal container cpu usage exceeds 80 percent (warning) | Container CPU usage is over 80% of limit | container-cpu-usage-high-reaching-limit.json |
Anthos on baremetal pod not ready for more than one hour (critical) | Pod is in a non-ready state for more than one hour | pod-not-ready-1h.json |
Anthos on baremetal persistent volume high usage (critical) | Claimed persistent volume is expected to fill up | persistent-volume-usage-high.json |
Anthos on baremetal node not ready for more than one hour (critical) | Node is in a non-ready state for more than one hour | node-not-ready-1h.json |
Anthos on baremetal node cpu usage exceeds 80 percent (critical) | Node CPU usage is over 80% | node-cpu-usage-high.json |
Anthos on baremetal node memory usage exceeds 80 percent (critical) | Node memory usage is over 80% | node-memory-usage-high.json |
Anthos on baremetal node disk usage exceeds 80 percent (critical) | Node disk usage is over 80% | node-disk-usage-high.json |
Kubernetes performance
Alert name | Description | Alerting policy definition in Cloud Monitoring |
---|---|---|
Anthos on baremetal API server error count ratio exceeds 10 percent (critical) | API server is returning errors for more than 10% of requests | api-server-error-ratio-10-percent.json |
Anthos on baremetal API server error count ratio exceeds 5 percent (warning) | API server is returning errors for more than 5% of requests | api-server-error-ratio-5-percent.json |
Anthos on baremetal etcd leader changes too frequently (critical) | The etcd leader changes too frequently | etcd-leader-changes-too-frequent.json |
Anthos on baremetal etcd proposals failed too frequently (critical) | The etcd proposals are failing too frequently | etcd-proposals-failed-too-frequent.json |
Anthos on baremetal etcd server is not in quorum (critical) | The etcd server is not in quorum | etcd-server-not-in-quorum.json |
Anthos on baremetal etcd storage exceeds 90 percent limit (critical) | The etcd storage usage is more than 90% of limit | etcd-storage-usage-high.json |
Getting notified
After you create an alerting policy, you can define one or more notification channels for it. There are several kinds of notification channels; for example, you can be notified by email, in a Slack channel, or through a mobile app. Choose the channels that suit your needs.
For instructions about how to configure notification channels, see Managing notification channels.
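You can also create a channel and attach it to a policy from the command line. A minimal sketch, assuming hypothetical values for the email address and for the policy and channel IDs:

# Create an email notification channel (address is a placeholder):
gcloud beta monitoring channels create \
    --display-name="Cluster alerts" \
    --type=email \
    --channel-labels=email_address=oncall@example.com

# Attach the channel to an existing alerting policy
# (replace PROJECT_ID, POLICY_ID, and CHANNEL_ID with your own values):
gcloud alpha monitoring policies update \
    projects/PROJECT_ID/alertPolicies/POLICY_ID \
    --add-notification-channels=projects/PROJECT_ID/notificationChannels/CHANNEL_ID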