Alerting behavior

Alerting policies exist in dynamic and complex environments, so using them effectively requires an understanding of some of the variables that can affect their behavior. The metrics and resources monitored by conditions, the duration windows for conditions, and the notification channels can each have an effect.

This page provides some additional information to help you understand the behavior of your alerting policies.

The effect of the duration window

The duration window of a condition is the length of time the condition must evaluate as true before triggering.

In general, the more highly available the service or the bigger the penalty for not detecting issues, the shorter the duration window you might want to specify. However, you don't want to set the duration window so short that an incident is created when a single measurement matches a condition. Instead, you usually want several consecutive measurements to meet the condition before an incident is created.

A condition resets its duration window each time a measurement doesn't satisfy the condition.

Example: This policy specifies a five-minute duration window.

If HTTP response latency is higher than two seconds,
and if this condition lasts longer than five minutes,
open an incident and send email to your support team.

The following sequence illustrates how the duration window affects the evaluation of the condition:

  1. For three consecutive minutes, HTTP latency is above two seconds.
  2. In the next measurement, latency falls below two seconds, so the condition resets the duration window.
  3. For the next five consecutive minutes, HTTP latency is above two seconds, so the policy is triggered.

Duration windows should be long enough to reduce false positives, but short enough to ensure that incidents are opened in a timely manner.
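
In the Cloud Monitoring API, the duration window corresponds to the condition's duration field. The following is a minimal sketch of the example policy above, assuming the Python client library (google-cloud-monitoring); the project ID, metric filter, display names, and threshold unit are placeholders that you would replace with your own values.

    # Minimal sketch: one threshold condition with a five-minute duration window.
    from google.cloud import monitoring_v3
    from google.protobuf import duration_pb2

    client = monitoring_v3.AlertPolicyServiceClient()
    project_name = "projects/my-project-id"  # placeholder project

    condition = monitoring_v3.AlertPolicy.Condition(
        display_name="HTTP latency above two seconds",
        condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
            # Placeholder filter; use the latency metric you actually monitor.
            filter='metric.type="loadbalancing.googleapis.com/https/total_latencies"'
                   ' resource.type="https_lb_rule"',
            comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
            threshold_value=2000,  # two seconds, in milliseconds
            duration=duration_pb2.Duration(seconds=300),  # five-minute duration window
        ),
    )

    policy = monitoring_v3.AlertPolicy(
        display_name="HTTP latency policy",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
        conditions=[condition],
        # notification_channels=[...],  # for example, your support team's email channel
    )

    created = client.create_alert_policy(name=project_name, alert_policy=policy)
    print("Created", created.name)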

Policies with multiple conditions

An alerting policy can contain up to 6 conditions.

If you are using the Cloud Monitoring API or if your alerting policy has multiple conditions, then you must specify when violations of the individual conditions result in an incident being opened:

  • If you are using the Google Cloud Console, you use the Policy triggers field.
  • If you are using the Cloud Monitoring API, then you use the combiner field.

The following list shows each Policy triggers setting in the Cloud Console, the equivalent combiner value in the Cloud Monitoring API, and a description of the setting:

  • Any condition is met (default): combiner value OR. An incident is opened if any resource violates any of the conditions.
  • All conditions are met: combiner value AND. An incident is opened if each condition is violated by at least one resource, even if a different resource violates each condition.
  • All conditions are met on matching resources: combiner value AND_WITH_MATCHING_RESOURCE. An incident is opened if each condition is violated by the same resource. This setting is the most stringent combining choice.
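
In the Cloud Monitoring API, these choices are the values of the AlertPolicy combiner field. The following sketch assumes the Python client library (google-cloud-monitoring); the display name is a placeholder.

    from google.cloud import monitoring_v3

    Combiner = monitoring_v3.AlertPolicy.ConditionCombinerType

    # "Any condition is met" (the default)
    any_condition_met = Combiner.OR
    # "All conditions are met"
    all_conditions_met = Combiner.AND
    # "All conditions are met on matching resources"
    all_met_on_matching_resources = Combiner.AND_WITH_MATCHING_RESOURCE

    policy = monitoring_v3.AlertPolicy(
        display_name="Multi-condition policy",  # placeholder
        combiner=all_met_on_matching_resources,
        # conditions=[...],  # up to 6 conditions
    )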

Disabled alerting policies

Alerting policies can be temporarily paused and restarted by disabling and re-enabling the policy. For example, if you have an alerting policy that notifies you when a process is down for more than 5 minutes, you can disable the alerting policy when you take the process down for an upgrade or other maintenance.

Disabling an alerting policy prevents the policy from triggering (or resolving) incidents, but it doesn't stop Cloud Monitoring from evaluating the policy conditions and recording the results.

Suppose the monitored process is down for 20 minutes for maintenance. If you restart the process and immediately re-enable the alerting policy, the policy recognizes that the process was down during the most recent five minutes and opens an incident.

When a disabled policy is re-enabled, Monitoring examines the values of all conditions over the most recent duration window, which might include data taken before, during, and after the paused interval. Policies can trigger immediately after resuming them, even with large duration windows.
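
For example, you can toggle a policy around a maintenance window by updating its enabled field. The following is a sketch that assumes the Python client library (google-cloud-monitoring); the policy name is a placeholder, and you would normally look it up with list_alert_policies.

    from google.cloud import monitoring_v3
    from google.protobuf import field_mask_pb2

    client = monitoring_v3.AlertPolicyServiceClient()
    policy_name = "projects/my-project-id/alertPolicies/1234567890"  # placeholder
    policy = client.get_alert_policy(name=policy_name)

    # Disable the policy before taking the process down for maintenance.
    policy.enabled = False
    client.update_alert_policy(
        alert_policy=policy,
        update_mask=field_mask_pb2.FieldMask(paths=["enabled"]),
    )

    # ... perform the maintenance and restart the process ...

    # Re-enable the policy. It immediately evaluates the most recent duration
    # window, which can include data from the maintenance period, so it might
    # open an incident right away.
    policy.enabled = True
    client.update_alert_policy(
        alert_policy=policy,
        update_mask=field_mask_pb2.FieldMask(paths=["enabled"]),
    )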

Anomalous incidents

Incidents for alerting policies, particularly those with metric-absence or “less than” threshold conditions, can appear on the Incidents pane of the Alerting window even though the policies seem to have triggered prematurely or incorrectly.

This occurs when there is a gap in the data, but it isn't always easy to identify such gaps. Sometimes the gap is obscured, and sometimes it is corrected.

In charts, for example, gaps in the data are interpolated: several minutes of data might be missing, but the chart connects the points on either side of the gap for visual continuity. Such a gap in the underlying data can be enough to trigger an alerting policy.

Points in logs-based metrics can arrive late and be backfilled for up to 10 minutes in the past. This effectively corrects the gap: it is filled in when the data finally arrives. Thus, a gap in a logs-based metric that can no longer be seen could still have caused an alerting policy to trigger.

Metric-absence and “less than” threshold conditions are evaluated in real time, with a small query delay. The status of the condition can change between the time it is evaluated and the time the corresponding incident is visible in Monitoring.

Partial metric data

If measurements are missing (for example, if there are no HTTP requests for a couple of minutes), the policy uses the last recorded value to evaluate conditions.

Example

  1. A condition specifies HTTP latency of two seconds or higher for five consecutive minutes.
  2. For three consecutive minutes, HTTP latency is three seconds.
  3. For two consecutive minutes, there are no HTTP requests. In this case, a condition carries forward the last measurement (three seconds) for these two minutes.
  4. After a total of five minutes the policy triggers, even though there has been no data for the last two minutes.
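
The carry-forward behavior can be illustrated with a short sketch. This is only an illustration of the rule described above, not Cloud Monitoring's implementation; None stands in for a minute with no measurements.

    def window_is_violated(latencies_s, threshold_s=2.0, window_minutes=5):
        """Return True if the threshold is met for window_minutes in a row,
        carrying the last recorded value forward over missing minutes."""
        consecutive = 0
        last_value = None
        for value in latencies_s:
            if value is None:        # no data: reuse the last measurement
                value = last_value
            else:
                last_value = value
            if value is not None and value >= threshold_s:
                consecutive += 1
            else:                    # a sub-threshold measurement resets the window
                consecutive = 0
            if consecutive >= window_minutes:
                return True
        return False

    # Three minutes at 3 s latency, then two minutes with no requests at all.
    print(window_is_violated([3.0, 3.0, 3.0, None, None]))  # True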

Missing or delayed metric data can result in policies not alerting and incidents not closing. Delays in data arriving from third-party cloud providers can be as high as 30 minutes, with 5-15 minute delays being the most common. A lengthy delay (longer than the duration window) can cause conditions to enter an "unknown" state. When the data finally arrives, Cloud Monitoring might have lost some of the recent history of the conditions. Later inspection of the time-series data might not reveal this problem because there is no evidence of delays once the data arrives.

You can minimize these problems by doing any of the following:

  • Contact your third-party cloud provider to see if there is a way to reduce metric collection latency.
  • Use longer duration windows in your conditions. This has the disadvantage of making your alerting policies less responsive.
  • Choose metrics that have a lower collection delay:

    • Monitoring agent metrics, especially when the agent is running on VM instances in third-party clouds.
    • Custom metrics, when you write their data directly to Cloud Monitoring.
    • Logs-based metrics, if logs collection is not delayed.

For more information, see Monitoring Agent Overview, Using Custom Metrics, and Logs-based Metrics.

Incidents per policy

An alerting policy can apply to many resources, and a problem affecting all resources can trigger the policy and open incidents for each resource. An incident is opened for each time series that violates a condition.

To prevent overloading the system, the number of incidents that a single policy can open simultaneously is limited to 5000.

For example, if a policy applies to 2000 (or 20,000) Compute Engine instances, and something causes each to violate the alerting conditions, then only 5000 incidents are opened. The remaining violations are ignored until some of the open incidents for that policy resolve.

Notifications per incident

A notification is sent out for each incident triggered by the alerting policy. Another notification is sent for each incident when it is resolved. An incident is opened for each time series that violates a condition, so each new incident results in a notification.

If a policy contains only one condition, then one notification is sent when the incident is initially opened, even if subsequent measurements continue to meet the condition.

If a policy contains multiple conditions, it may send multiple notifications depending on how you set up the policy:

  • If a policy triggers only when all conditions are met, then the policy sends a notification only when an incident initially opens.

  • If a policy triggers when any condition is met, then the policy sends a notification each time a new combination of conditions is met. For example:

    1. ConditionA is met; an incident opens and a notification is sent.
    2. The incident is still open when a subsequent measurement meets both ConditionA and ConditionB. In this case, the incident remains open and another notification is sent.

Notification latency

Notification latency is the delay from the time a problem first starts until the time a notification arrives.

The following events and settings contribute to the overall notification latency:

  • Metric collection delay: The time Cloud Monitoring needs to collect metric values. For Google Cloud values, this is typically negligible. For AWS CloudWatch metrics, this can be several minutes. For uptime checks, this can be an average of 2 minutes (from the end of the duration window).

  • Duration window: The window configured for the condition. A condition is met only if it is true throughout the duration window. For example, if you specify a five-minute window, the notification is delayed at least five minutes from when the event first occurs.

  • Time for notification to arrive: Notification channels such as email and SMS can themselves experience network or other latencies (unrelated to what is being delivered), sometimes approaching minutes. On some channels, such as SMS and Slack, there is no guarantee that messages will be delivered.
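
For example, if metric collection adds about two minutes of delay, the condition uses a five-minute duration window, and the notification channel takes roughly one minute to deliver the message, then the notification arrives about eight minutes after the problem first occurs.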

What's next