Sample policies

This page provides a collection of sample alerting policies that you can use for inspiration and as a starting point for policies of your own design.

Representing policies in JSON or YAML format

You can represent alerting policies in two data formats: JSON and YAML. The Cloud SDK can read and write both formats; the REST API uses JSON.

To generate YAML (the default) representations of your existing alerting policies and notification channels, use the gcloud alpha monitoring policies list and describe commands, or the gcloud alpha monitoring channels list and describe commands, respectively.

For example, this command retrieves a single policy and captures the output in the file test-policy.yaml:

gcloud alpha monitoring policies describe projects/a-gcp-project/alertPolicies/12669073143329903307 > test-policy.yaml

To generate JSON representations of your existing alerting policies and notification channels, add the --format=json flag to the same list and describe commands. For example:

gcloud alpha monitoring policies list --format=json

Copying policies

As illustrated in the backup/restore example, you can use saved policies to create new copies of those policies. You can also use them as a starting point for creating similar policies.

You can use a policy saved in one project to create a new, or similar, policy in another project. However, you must first make the following changes in a copy of the saved policy:

  • Remove the following fields from any notification channels:
    • name
    • verificationStatus
  • Create notification channels before referring to the channels in alerting policies (you need the new channel identifiers).
  • Remove the following fields from any alerting policies you are recreating:
    • name
    • condition.name
    • creationRecord
    • mutationRecord
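The field-stripping steps above can be sketched in Python. This is a minimal sketch that assumes the saved policy and channel have already been parsed from their JSON representations into dictionaries; the helper names scrub_policy and scrub_channel are illustrative, not part of any Google library:

```python
import copy

# Server-assigned fields that must not appear when re-creating a policy.
POLICY_FIELDS = ("name", "creationRecord", "mutationRecord")
CHANNEL_FIELDS = ("name", "verificationStatus")

def scrub_policy(policy):
    """Return a copy of a saved alerting policy with server-assigned
    fields removed, so it can be re-created in another project."""
    clean = copy.deepcopy(policy)
    for field in POLICY_FIELDS:
        clean.pop(field, None)
    for condition in clean.get("conditions", []):
        condition.pop("name", None)  # condition.name is also server-assigned
    return clean

def scrub_channel(channel):
    """Return a copy of a notification channel ready for re-creation."""
    clean = copy.deepcopy(channel)
    for field in CHANNEL_FIELDS:
        clean.pop(field, None)
    return clean

# Toy saved policy, loosely modeled on the samples on this page.
saved = {
    "name": "projects/a-gcp-project/alertPolicies/12669073143329903307",
    "displayName": "Very high CPU usage",
    "combiner": "OR",
    "creationRecord": {"mutateTime": "2018-03-26T18:52:39Z"},
    "conditions": [
        {"name": "projects/a-gcp-project/alertPolicies/126/conditions/123",
         "displayName": "CPU usage is extremely high"}
    ],
}
clean = scrub_policy(saved)
```

After scrubbing, the cleaned dictionary can be serialized back to JSON or YAML and passed to the policy-creation command or API call for the target project.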

Policy samples

The policies here are organized using the same terminology that the Stackdriver Monitoring console uses, for example, “rate-of-change policy”, but there are really only two types of conditions underlying all these classifications:

  • A threshold condition; almost all of the policy types mentioned in the UI are variants on a threshold condition
  • An absence condition

In the samples here, these are indicated by conditionThreshold and conditionAbsent conditions. See the reference page for Condition for more information.

Metric-threshold policy

A metric-threshold policy is one that detects when some value crosses a predetermined boundary. Threshold policies let you know that something is approaching an important point, so you can take some action. For example, an alert when available disk space falls below 10 percent of total disk space warns you that your system may soon run out of disk space.

The following policy uses average CPU usage as an indicator of the health of a group of VMs. It causes an alert when the average CPU utilization, grouped by project, instance, and zone and measured over 60-second intervals, exceeds a threshold of 90-percent utilization for 15 minutes (900 seconds):

{
    "displayName": "Very high CPU usage",
    "combiner": "OR",
    "conditions": [
        {
            "displayName": "CPU usage is extremely high",
            "conditionThreshold": {
                "aggregations": [
                    {
                        "alignmentPeriod": "60s",
                        "crossSeriesReducer": "REDUCE_MEAN",
                        "groupByFields": [
                            "project",
                            "resource.label.instance_id",
                            "resource.label.zone"
                        ],
                        "perSeriesAligner": "ALIGN_MAX"
                    }
                ],
                "comparison": "COMPARISON_GT",
                "duration": "900s",
                "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\"
                           AND resource.type=\"gce_instance\"",
                "thresholdValue": 0.9,
                "trigger": {
                    "count": 1
                }
            }
        }
    ]
}

Metric-absence policy

A metric-absence policy is triggered when no data is written to a metric for the specified duration.

One way to demonstrate this is to create a custom metric that nothing will ever write to. You don't need a custom metric for this kind of policy, but for demonstration purposes, it's easy to ensure nothing actually uses it.

Here's a sample descriptor for a custom metric. You could create the metric using the APIs Explorer.

{
  "description": "Number of times the pipeline has run",
  "displayName": "Pipeline runs",
  "metricKind": "GAUGE",
  "type": "custom.googleapis.com/pipeline_runs",
  "labels": [
    {
      "description": "The name of the pipeline",
      "key": "pipeline_name",
      "valueType": "STRING"
    }
  ],
  "unit": "1",
  "valueType": "INT64"
}

See Using custom metrics for more information.

The following alerting policy is triggered if no data is written to this metric for a period of 3900 seconds (65 minutes, allowing a 5-minute cushion for late starts): in other words, your hourly pipeline has failed to run. Note that the condition used here is conditionAbsent.

{
    "displayName": "Data ingestion functioning",
    "combiner": "OR",
    "conditions": [
        {
            "displayName": "Hourly pipeline is up",
            "conditionAbsent": {
                "duration": "3900s",
                "filter": "resource.type=\"global\"
                           AND metric.type=\"custom.googleapis.com/pipeline_runs\"
                           AND metric.label.pipeline_name=\"hourly\""
            }
        }
    ]
}

Rate-of-change policy

This policy alerts you when the rate of CPU utilization is increasing rapidly:

{
  "displayName": "High CPU rate of change",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "CPU usage is increasing at a high rate",
      "conditionThreshold": {
         "aggregations": [
           {
             "alignmentPeriod": "900s",
             "perSeriesAligner": "ALIGN_PERCENT_CHANGE"
           }],
        "comparison": "COMPARISON_GT",
        "duration": "180s",
        "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\"",
        "thresholdValue": 0.5,
        "trigger": {
          "count": 1
         }
      }
    }
  ]
}

Group-aggregate policy

This policy alerts you when the average CPU utilization across a Google Kubernetes Engine cluster exceeds a threshold:

{
    "displayName": "CPU utilization across GKE cluster exceeds 10 percent",
    "combiner": "OR",
    "conditions": [
         {
            "displayName": "Group Aggregate Threshold across All Instances in Group GKE cluster",
            "conditionThreshold": {
                "filter": "group.id=\"3691870619975147604\" AND metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\"",
                "comparison": "COMPARISON_GT",
                "thresholdValue": 0.1,
                "duration": "300s",
                "trigger": {
                    "count": 1
                },
                "aggregations": [
                    {
                        "alignmentPeriod": "60s",
                        "perSeriesAligner": "ALIGN_MEAN",
                        "crossSeriesReducer": "REDUCE_MEAN",
                        "groupByFields": [
                              "project",
                              "resource.label.instance_id",
                              "resource.label.zone"
                        ]
                    },
                    {
                        "alignmentPeriod": "60s",
                        "perSeriesAligner": "ALIGN_SUM",
                        "crossSeriesReducer": "REDUCE_MEAN"
                    }
                ]
            }
        }
    ]
}

This policy assumes the existence of the following group:

{
    "name": "projects/a-gcp-project/groups/3691870619975147604",
    "displayName": "GKE cluster",
    "filter": "resource.metadata.name=starts_with(\"gke-kuber-cluster-default-pool-6fe301a0-\")"
}

To identify the equivalent fields for your groups, list your group details using the APIs Explorer on the project.groups.list reference page.

Uptime-check policy

The status of uptime checks appears on the Stackdriver Monitoring console, but you can use an alerting policy to notify you directly if the uptime check fails.

For example, the following JSON describes an uptime check on the Google Cloud site. It checks the availability every 5 minutes.

The uptime check was created with the Stackdriver Monitoring console. The JSON representation here was created by listing the uptime checks in the project using the Monitoring API; see uptimeCheckConfigs.list. You can also create uptime checks with the Monitoring API.

{
    "name": "projects/a-gcp-project/uptimeCheckConfigs/uptime-check-for-google-cloud-site",
    "displayName": "Uptime check for Google Cloud site",
    "monitoredResource": {
        "type": "uptime_url",
        "labels": {
            "host": "cloud.google.com"
      }
    },
    "httpCheck": {
        "path": "/index.html",
        "port": 80,
        "authInfo": {}
    },
    "period": "300s",
    "timeout": "10s",
    "contentMatchers": [
        {}
    ]
}

To create an alerting policy for an uptime check, refer to the uptime check by its UPTIME_CHECK_ID. This ID is set when the check is created; it appears as the last component of the name field and isn't visible in the UI. If you are using the Monitoring API, the uptimeCheckConfigs.create method returns the ID.

The ID is derived from the displayName, which in this case was set in the UI. This can be verified by listing the uptime checks and looking at the name values.

The ID for the uptime check previously described is uptime-check-for-google-cloud-site.

The alerting policy below triggers if the uptime check fails, and it sends a notification to the specified notification channel:

{
    "displayName": "Google Cloud site uptime failure",
    "combiner": "OR",
    "conditions": [
        {
            "displayName": "Failure of uptime check_id uptime-check-for-google-cloud-site",
            "conditionThreshold": {
                "aggregations": [
                    {
                        "alignmentPeriod": "1200s",
                        "perSeriesAligner": "ALIGN_NEXT_OLDER",
                        "crossSeriesReducer": "REDUCE_COUNT_FALSE",
                        "groupByFields": [ "resource.label.*" ]
                    }
                ],
                "comparison": "COMPARISON_GT",
                "duration": "600s",
                "filter": "metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\"
                           AND metric.label.check_id=\"uptime-check-for-google-cloud-site\"
                           AND resource.type=\"uptime_url\"",
                "thresholdValue": 1,
                "trigger": {
                    "count": 1
                }
            }
        }
    ]
}

The filter in the alerting policy specifies the metric that is being monitored by its type and label. The metric type is monitoring.googleapis.com/uptime_check/check_passed. The metric label identifies the specific uptime check that is being monitored. In this example, the label field check_id contains the uptime check ID.

AND metric.label.check_id=\"uptime-check-for-google-cloud-site\"

See Monitoring filters for more information.
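As a sketch of how such a filter is put together, the helper below simply joins the three clauses described above with AND; the function name is illustrative and not part of any Google client library:

```python
def uptime_check_filter(check_id):
    """Build the monitoring filter that selects the check_passed
    metric for one specific uptime check."""
    clauses = [
        'metric.type="monitoring.googleapis.com/uptime_check/check_passed"',
        'metric.label.check_id="{}"'.format(check_id),
        'resource.type="uptime_url"',
    ]
    return " AND ".join(clauses)

f = uptime_check_filter("uptime-check-for-google-cloud-site")
```

The resulting string is what appears (with escaped quotes) in the filter field of the alerting-policy JSON above.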

Process-health policy

A process-health policy can notify you if the number of processes that match a pattern crosses a threshold. This can be used to tell you, for example, that a process has stopped running.

This policy sends a notification to the specified notification channel when no process matching the string nginx, running as user www, has been available for more than 5 minutes:

{
    "displayName": "Server health",
    "combiner": "OR",
    "conditions": [
        {
            "displayName": "Process 'nginx' is not running",
            "conditionThreshold": {
                "filter": "select_process_count(\"has_substring(\\\"nginx\\\")\", \"www\") AND resource.type=\"gce_instance\"",
                "comparison": "COMPARISON_LT",
                "thresholdValue": 1,
                "duration": "300s"
            }
        }
    ]
}

For another example, see Process health.

Metric ratio

With the API, you can create a policy that computes the ratio of two related metrics and fires when that ratio crosses a threshold.

A ratio condition is a variant of a simple threshold condition, in which the condition uses two filters: the usual filter, which acts as the numerator of the ratio, and a denominatorFilter, which acts as the denominator of the ratio.

The time series from both filters must be aggregated in the same way, so that the computation of the ratio of the values is meaningful. The alerting policy is triggered if the ratio of the two filters violates a threshold value for the specified duration.

Ratio of HTTP errors

The following policy creates a threshold condition built on the ratio of the count of HTTP error responses to the count of all HTTP responses.

{
    "displayName": "HTTP error count exceeds 50 percent for App Engine apps",
    "combiner": "OR",
    "conditions": [
        {
            "displayName": "Ratio: HTTP 500s error-response counts / All HTTP response counts",
            "conditionThreshold": {
                 "filter": "metric.label.response_code>=\"500\" AND
                            metric.label.response_code<\"600\" AND
                            metric.type=\"appengine.googleapis.com/http/server/response_count\" AND
                            project=\"a-gcp-project\" AND
                            resource.type=\"gae_app\"",
                 "aggregations": [
                    {
                        "alignmentPeriod": "300s",
                        "crossSeriesReducer": "REDUCE_SUM",
                        "groupByFields": [
                          "project",
                          "resource.label.module_id",
                          "resource.label.version_id"
                        ],
                        "perSeriesAligner": "ALIGN_DELTA"
                    }
                ],
                "denominatorFilter": "metric.type=\"appengine.googleapis.com/http/server/response_count\" AND
                                      project=\"a-gcp-project\" AND
                                      resource.type=\"gae_app\"",
                "denominatorAggregations": [
                   {
                      "alignmentPeriod": "300s",
                      "crossSeriesReducer": "REDUCE_SUM",
                      "groupByFields": [
                        "project",
                        "resource.label.module_id",
                        "resource.label.version_id"
                       ],
                      "perSeriesAligner": "ALIGN_DELTA"
                    }
                ],
                "comparison": "COMPARISON_GT",
                "thresholdValue": 0.5,
                "duration": "0s",
                "trigger": {
                    "count": 1
                }
            }
        }
    ]
}

The metric and resource types

The metric type for this policy is appengine.googleapis.com/http/server/response_count, which has two labels:

  • response_code, a 64-bit integer representing the HTTP status code for the request. This policy filters time-series data on this label, so it can determine the following:
    • The number of responses received.
    • The number of error responses received.
    • The ratio of error responses to all responses.
  • loading, a boolean value that indicates whether the request was loading. The loading label is irrelevant in this alerting policy.

The alerting policy is concerned with response data from App Engine apps, that is, data originating from the monitored-resource type gae_app. This monitored resource has three labels:

  • project_id, the ID for the Google Cloud project.
  • module_id, the name of the service or module in the app.
  • version_id, the version of the app.

For reference information on these metric and monitored-resource types, see App Engine metrics in the list of metrics and the gae_app entry in the list of monitored resources.

What this policy does

This policy computes the ratio of error responses to total responses. The policy triggers an alert notification if the ratio goes above 50% (that is, the ratio is greater than 0.5) over the 5-minute alignment period.

This policy captures the module and version of the app that violates the condition by grouping the time series in each filter by the values of those labels.

  • The filter in the condition looks at HTTP responses from an App Engine app and selects those responses in the error range, 5xx. This is the numerator in the ratio.
  • The denominator filter in the condition looks at all HTTP responses from an App Engine app.

The policy triggers the alert notification immediately; the permitted duration for the condition is 0 seconds. This policy uses a trigger count of 1, which is the number of time series that needs to violate the condition to trigger the alert notification. For an App Engine app with a single service, a trigger of 1 is fine. If you have an app with 20 services and you want to trigger an alert if 3 or more services violate the condition, use a trigger count of 3.
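The trigger-count logic described above can be sketched as a simple predicate. Assume violating_series_count is the number of time series currently violating the condition; the names here are illustrative, not API fields:

```python
def should_alert(violating_series_count, trigger_count=1):
    """The condition is met when at least trigger_count time
    series violate the threshold."""
    return violating_series_count >= trigger_count

# Single-service app: one violating series is enough (trigger_count=1).
# 20-service app that should alert only when 3 or more services
# misbehave: pass trigger_count=3.
```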

Setting up a ratio

The numerator and denominator filters are exactly the same except that the condition filter in the numerator matches response codes in the error range, and the condition filter in the denominator matches all response codes. The following clauses appear only in the numerator condition:

      metric.label.response_code>="500" AND
      metric.label.response_code<"600"

Otherwise, the numerator and denominator filters are the same.

The time series selected by each filter must be aggregated in the same way to make the computation of the ratio valid. Each filter might collect multiple time series, since there will be a different time series for each combination of values for labels. This policy groups this set of time series by specified resource labels, which partitions the set of time series into a set of groups. Some of the time series in each group match the numerator filter; the rest match the denominator filter.

To compute a ratio, the set of time series that matches each filter must be aggregated down to a single time series each. This leaves each group with two time series, one for the numerator and one for the denominator. Next, the ratio of points in the numerator and denominator time series in each group can be computed.

In this policy, the time series for both filters are aggregated as follows:

  • Each filter creates a number of time series aligned at 5-minute intervals, with values computed by applying ALIGN_DELTA to the values in each 5-minute alignment interval. This aligner returns the number of matching responses in that interval as a 64-bit integer.

  • The time series within each filter are also grouped by the values of the resource labels for module and version, so each group will contain two sets of aligned time series: those matching the numerator filter and those matching the denominator filter.

  • The time series within each group matching the numerator or denominator filter are aggregated down to a single time series by summing the values of the individual time series, using the REDUCE_SUM cross-series reducer. This results in one time series for the numerator and one for the denominator, each reporting the number of responses across all matching time series in the alignment interval.

The policy then computes, for the numerator and denominator time series representing each group, the ratio of the values. Once the ratio is available, this policy is a simple metric-threshold policy.
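The whole pipeline can be sketched numerically. The sketch below assumes toy per-series counts already aligned to the same 5-minute interval (the equivalent of the ALIGN_DELTA step); it then mirrors the remaining steps described above: group series by the (module, version) labels, sum each filter's series down to one value per group (REDUCE_SUM), and compare the ratio to the threshold. None of these variable names come from the Monitoring API.

```python
from collections import defaultdict

THRESHOLD = 0.5  # ratio of 5xx responses to all responses

# Toy aligned values for one 5-minute interval:
# (module, version, response_code) -> response count in the interval.
series = [
    ("default", "v1", 200, 30),
    ("default", "v1", 500, 40),   # matches the numerator (5xx) filter
    ("api", "v2", 200, 195),
    ("api", "v2", 503, 5),        # matches the numerator (5xx) filter
]

# Group by labels; REDUCE_SUM each filter's series within a group.
# Every series matches the denominator filter; only 5xx series also
# match the numerator filter.
groups = defaultdict(lambda: {"errors": 0, "total": 0})
for module, version, code, count in series:
    key = (module, version)
    groups[key]["total"] += count
    if 500 <= code < 600:
        groups[key]["errors"] += count

# One ratio per group; alert on any group over the threshold.
ratios = {k: g["errors"] / g["total"] for k, g in groups.items()}
violations = {k: r for k, r in ratios.items() if r > THRESHOLD}
```

In this toy data, the ("default", "v1") group has 40 error responses out of 70 total (a ratio of about 0.57), so it violates the 0.5 threshold, while ("api", "v2") does not.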
