Creating distribution metrics

This page explains how to create distribution-type logs-based metrics using the GCP Console and the Stackdriver Logging API.

Overview

Distribution metrics require both a filter to select the relevant log entries and a value extractor to extract the numeric value for the distribution. The value extractor is the same kind of expression that is used for user-defined labels.

A distribution metric records the statistical distribution of the extracted values in histogram buckets. The extracted values are not recorded individually; instead, their distribution across the configured buckets is recorded, along with the count, mean, and sum of squared deviations of the values. You can use the default histogram bucket layout in your distribution, or you can fine-tune the bucket boundaries to better fit the range of your values.
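
As an illustration of what is recorded, the following sketch (not part of the Logging service; the sample values and bucket boundaries are made up) computes the count, mean, sum of squared deviations, and per-bucket counts for a batch of extracted values:

from bisect import bisect_right

# Hypothetical extracted values (for example, latencies in ms) and the
# boundaries between finite buckets.
values = [3.2, 7.5, 7.9, 14.0, 42.0]
bounds = [5, 20, 35, 50, 65]

count = len(values)
mean = sum(values) / count
sum_of_squared_deviation = sum((v - mean) ** 2 for v in values)

# bucket_counts[0] is the underflow bucket; bucket_counts[-1] is the overflow
# bucket. A bucket includes its lower boundary and excludes its upper boundary.
bucket_counts = [0] * (len(bounds) + 1)
for v in values:
    bucket_counts[bisect_right(bounds, v)] += 1

print(count, mean, sum_of_squared_deviation, bucket_counts)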

Creating a distribution metric

Logging Console

Follow these steps to create a logs-based metric in the GCP Console:

  1. Click on Stackdriver Logging > Logs-based Metrics in the left-side navigation list in the GCP Console, or click the following button:

    Go to the Logs Viewer

  2. Use the drop-down menu at the top of the page to select a project.

  3. Click Create Metric at the top of the page. The Metric Editor panel appears on the right side of the page, and the viewer panel showing your logs appears on the left side.

  4. In the viewer panel, create a filter that shows only the log entries that you want to count in your metric.

    The basic filter interface is available in the drop-down menus above the log entries. Alternatively, you can access the advanced filter interface by clicking the drop-down menu at the right-hand end of the search bar and selecting Convert to advanced filter.

    For more information, see The user interfaces.

  5. In the Metric Editor panel, set the following fields:

    • Name: Choose a name that is unique among logs-based metrics in your project. Some naming restrictions apply; see Troubleshooting for details.
    • Description: Describe the metric.
    • Labels: (Optional) Add labels by clicking Add Item for each label. For details about defining labels, see Logs-based metric labels.
    • Units: (Optional) For distribution metrics, enter units such as s or ms. For more information, see the unit field of the MetricDescriptor.
    • Type: Distribution.
    • Field name: Enter the log entry field that contains the distribution's value. You are offered choices as you type. For example:

      jsonPayload.latency
      
    • Extraction expression: (Optional) If Field name always contains a numeric value that can be converted to type double, you can leave this field empty. Otherwise, specify a regular expression that extracts the numeric distribution value from the field value. You can use the Build menu to interactively build and verify the regular expression.

      Example: Suppose your latency log entry field contains a number followed by ms for milliseconds. The following regular expression captures the number without the unit suffix:

      ([0-9.]+)ms
      

      The parentheses form a regexp capture group, which identifies the part of the text match that is extracted. For more information on regular expressions, see RE2 syntax. A rough illustration of this extraction appears after these steps.

    • More (Histogram buckets): (Optional) Click More to open a section of the form where you can specify a custom bucket layout. If you do not specify a bucket layout, a default layout is used. For more information, see Histogram buckets on this page.

  6. Click Create Metric.
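
As a rough illustration of the extraction expression described in step 5, the following sketch applies the same regular expression with Python's re module (whose syntax overlaps with RE2 for this pattern); the sample field value is made up:

import re

# Hypothetical value of the jsonPayload.latency field.
field_value = "request served in 123.45ms"

# The capture group keeps only the number; the "ms" suffix is discarded.
match = re.search(r"([0-9.]+)ms", field_value)
if match:
    print(float(match.group(1)))  # 123.45 is the value recorded in the distribution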

Logging API

To create a distribution metric, use the projects.metrics.create method of the Stackdriver Logging API. You can try out the method in the APIs Explorer. Prepare the arguments to the method as follows:

  1. Set the parent field to the project in which the metric is to be created:

    projects/[PROJECT_ID]
    
  2. Set the request body to a LogMetric object. Following is an example of the object for a distribution metric:

    {
      name:        "my-metric",
      description: "Description of my-metric.",
      filter:      "resource.type=gce_instance AND logName:\"logs/syslog\"",
      valueExtractor: "REGEXP_EXTRACT(jsonPayload.latencyField, \"([0-9.]+)ms\")",
    
      labelExtractors: {
        "my-label-1":
          "REGEXP_EXTRACT(jsonPayload.someField, \"before ([[:word:]]+) after\")",
        "my-label-2":
          "REGEXP_EXTRACT(jsonPayload.anotherField, \"before ([0-9]+) after\")",
      },
      bucketOptions: { [SEE_BELOW] },
    
      metricDescriptor: {
          metricKind: DELTA,
          valueType: DISTRIBUTION,
          unit: "ms",
    
          labels: [
            {
              key: "my-label-1",
              valueType: STRING,
              description: "Description of string my-label-1.",
            },
            {
              key: "my-label-2",
              valueType: INT64,
              description: "Description of integer my-label-2.",
            }
          ]
      },
    }
    

Notes for distribution metrics:

  • Some naming restrictions apply; see Troubleshooting for details.

  • metricDescriptor: a MetricDescriptor object. metricKind must be DELTA. valueType must be DISTRIBUTION.
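
If you prefer to call the method from code rather than the APIs Explorer, the following sketch uses the Python google-api-python-client discovery client with Application Default Credentials. The project ID and field names are placeholders, and the label extractors and label descriptors are omitted for brevity:

from googleapiclient import discovery

# Build a client for the Stackdriver Logging API (v2).
service = discovery.build("logging", "v2")

metric_body = {
    "name": "my-metric",
    "description": "Description of my-metric.",
    "filter": 'resource.type=gce_instance AND logName:"logs/syslog"',
    "valueExtractor": 'REGEXP_EXTRACT(jsonPayload.latencyField, "([0-9.]+)ms")',
    "bucketOptions": {
        "exponentialBuckets": {"numFiniteBuckets": 4, "growthFactor": 2, "scale": 3}
    },
    "metricDescriptor": {
        "metricKind": "DELTA",
        "valueType": "DISTRIBUTION",
        "unit": "ms",
    },
}

response = (
    service.projects()
    .metrics()
    .create(parent="projects/[PROJECT_ID]", body=metric_body)
    .execute()
)
print(response)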

Histogram buckets

Distribution metrics include a histogram that counts the number of values that fall in specified ranges (buckets). You can have up to 200 buckets in a distribution metric.

Each bucket has two boundary values, L and H, that define the lowest and highest values covered by the bucket. The width of the bucket is H - L. Because there cannot be gaps between buckets, the lower boundary of one bucket is the upper boundary of the previous bucket, and so on. So that a boundary value does not fall into more than one bucket, a bucket includes its lower boundary; its upper boundary belongs to the next bucket.

All bucket layouts can be specified by listing, in increasing order, the boundary values between individual buckets. The first bucket is the underflow bucket, which counts values less than the first boundary. The last bucket is the overflow bucket, which counts values greater than or equal to the last boundary. The other buckets count values greater than or equal to their lower boundary and less than their upper boundary. If there are n boundary values, then there are n+1 buckets. Excluding the underflow and overflow buckets, there are n-1 finite buckets.

There are three ways to specify the boundaries between histogram buckets for distribution metrics. You can specify a formula for the boundary values (linear or exponential layouts), or you can list the boundary values explicitly; a sketch illustrating all three layouts follows this list:

  • Linear(offset, width, i): Every bucket has the same width. The boundaries are offset + width * i, for i=0,1,2,...,N. For more information on linear buckets, see the API reference.

  • Exponential(scale, growth_factor, i): Bucket widths increase for higher values. The boundaries are scale * growth_factor^i, for i=0,1,2,...,N. For more information on exponential buckets, see the API reference.

  • Explicit: You list all the boundaries for the buckets in the bounds array. If N is the total number of buckets (the number of bounds plus one), then bucket i has these boundaries:

    Upper bound: bounds[i] for (0 <= i < N-1)
    Lower bound: bounds[i - 1] for (1 <= i < N)

    For more information on explicit buckets, see the API reference.
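
The following sketch is an illustration only; the parameter values are arbitrary and match the examples later on this page. It shows the boundary values that each layout produces:

def linear_bounds(offset, width, num_finite_buckets):
    # Boundaries: offset + width * i, for i = 0..N
    return [offset + width * i for i in range(num_finite_buckets + 1)]

def exponential_bounds(scale, growth_factor, num_finite_buckets):
    # Boundaries: scale * growth_factor^i, for i = 0..N
    return [scale * growth_factor ** i for i in range(num_finite_buckets + 1)]

explicit_bounds = [0, 1, 2, 5, 10, 20]   # listed directly

print(linear_bounds(5, 15, 4))        # [5, 20, 35, 50, 65]
print(exponential_bounds(3, 2, 4))    # [3, 6, 12, 24, 48]
print(explicit_bounds)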

How you specify your histogram buckets depends on whether you use the Logging Console or the Logging API, as explained in the following sections:

Logging Console

The Histogram buckets section of the form opens when you create a distribution metric and click More in the Metric Editor form. The fields for each bucket layout are described below.

Linear buckets: Fill in the histogram bucket form as follows.

  • Type: Linear
  • Start value (a): The lower boundary of the first finite bucket. This value is called offset in the API.
  • Number of buckets (N): The number of finite buckets. The value must be greater than 0.
  • Bucket width (b): The difference between the upper bound and lower bound in each finite bucket. The value must be greater than 0.

For example, if the start value is 5, the number of buckets is 4, and the bucket width is 15, then the bucket ranges are as follows:

[-INF, 5), [5, 20), [20, 35), [35, 50), [50, 65), [65, +INF]

Explicit buckets: Fill in the histogram bucket form as follows:

  • Type: Explicit
  • Bounds (b): A comma-separated list of the boundary values of the finite buckets. This also determines the number of buckets and their widths.

For example, if the list of boundaries is:

0, 1, 2, 5, 10, 20

then there are five finite buckets with the following ranges:

[-INF, 0), [0, 1), [1, 2), [2, 5), [5, 10), [10, 20), [20, +INF]

Exponential buckets: Fill in the histogram bucket form as follows:

  • Type: Exponential
  • Number of buckets (N): The total number of finite buckets. The value must be greater than 0.

  • Linear scale (a): The linear scale for the buckets. The value must be greater than 0.

  • Exponential growth factor (b): The exponential growth factor for the buckets. The value must be greater than 1.

For example, if N=4, a=3, and b=2, then the bucket ranges are as follows:

[-INF, 3), [3, 6), [6, 12), [12, 24), [24, 48), [48, +INF]

For more information about the buckets, see BucketOptions in the Stackdriver Monitoring API.

Logging API

The optional bucket layout is specified by the bucketOptions field in the LogMetric object supplied to projects.metrics.create. For the complete LogMetric object, see Creating a distribution metric on this page. The additions for bucket layouts are shown below:

Linear buckets:

{ # LogMetric object
  ...
  bucketOptions: {
    linearBuckets: {
      numFiniteBuckets: 4,
      width: 15,
      offset: 5
    }
  },
  ...
}

The previous sample creates the following buckets:

[-INF, 5), [5, 20), [20, 35), [35, 50), [50, 65), [65, +INF]

Explicit buckets: Boundaries are listed individually.

{ # LogMetric object
  ...
  bucketOptions: {
    explicitBuckets: {
      bounds: [0, 1, 2, 5, 10, 20]
    }
  },
  ...
}

The previous sample creates the following buckets:

[-INF, 0), [0, 1), [1, 2), [2, 5), [5, 10), [10, 20), [20, +INF]

Exponential buckets: Boundaries are scale * growthFactor^i, for i=0,1,2,...,numFiniteBuckets.

{ # LogMetric object
  ...
  bucketOptions: {
    exponentialBuckets: {
      numFiniteBuckets: 4,
      growthFactor: 2,
      scale: 3
    }
  },
  ...
}

The previous sample creates the following buckets:

[-INF, 3), [3, 6), [6, 12), [12, 24), [24, 48), [48, +INF]

New metric latency

Your new metric appears in the Logs Viewer's list of metrics and in the relevant Stackdriver Monitoring menus right away. However, it may take up to a minute for the metric to start collecting data for the matching log entries.

Inspecting distribution metrics

To list the logs-based metrics in your GCP project, or to inspect a particular metric in your project, do the following:

Logging Console

Go to the Logs-Based Metrics page, which lists all of the logs-based metrics in the current project, by clicking the following button:

Go to Logs-based Metrics

To view the data in a logs-based metric, select View in Metrics Explorer from the overflow menu at the right side of the metric's listing.

Logging API

Listing metrics

To list the user-defined logs-based metrics in a project, use the projects.metrics.list API method. Fill in the parameters to the method as follows:

  • parent: The resource name of the project: projects/[PROJECT_ID].
  • pageSize: The maximum number of results.
  • pageToken: Gets the next page of results. See projects.metrics.list.
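
As an illustration, the following sketch pages through the metrics with the Python google-api-python-client discovery client; the project ID is a placeholder:

from googleapiclient import discovery

service = discovery.build("logging", "v2")

# List the user-defined logs-based metrics in the project, page by page.
request = service.projects().metrics().list(parent="projects/[PROJECT_ID]", pageSize=100)
while request is not None:
    response = request.execute()
    for metric in response.get("metrics", []):
        print(metric["name"])
    request = service.projects().metrics().list_next(request, response)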

Retrieving metric definitions

To retrieve a single user-defined logs-based metric, use the projects.metrics.get API method. Fill in the parameters to the method as follows:

  • metricName: The resource name of the metric:

    projects/[PROJECT_ID]/metrics/[METRIC_ID]
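
For example, using the Python discovery client (a sketch; the project and metric IDs are placeholders):

from googleapiclient import discovery

service = discovery.build("logging", "v2")

metric = (
    service.projects()
    .metrics()
    .get(metricName="projects/[PROJECT_ID]/metrics/[METRIC_ID]")
    .execute()
)
print(metric)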
    

Reading metric data

To read the time series data in a logs-based metric, use the projects.timeSeries.list method in the Stackdriver Monitoring API. For details, see Reading Time Series. Here is the information you need for logs-based metrics:

  • The metric type is logging.googleapis.com/user/[METRIC_ID].
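
As a sketch, the following reads the last hour of data with the Python discovery client for the Monitoring API. The project and metric IDs are placeholders; interval_startTime and interval_endTime are the flattened names the discovery client uses for the interval.startTime and interval.endTime query parameters:

from datetime import datetime, timedelta, timezone

from googleapiclient import discovery

monitoring = discovery.build("monitoring", "v3")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

response = (
    monitoring.projects()
    .timeSeries()
    .list(
        name="projects/[PROJECT_ID]",
        filter='metric.type="logging.googleapis.com/user/[METRIC_ID]"',
        interval_startTime=start.isoformat(),
        interval_endTime=end.isoformat(),
    )
    .execute()
)
for series in response.get("timeSeries", []):
    print(series["metric"], len(series.get("points", [])))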

Cloud SDK

To list the user-defined logs-based metrics in your project, use the following command:

gcloud logging metrics list

To display a user-defined logs-based metric in your project, use the following command:

gcloud logging metrics describe [METRIC_NAME]

For more details, use the following command:

gcloud logging metrics --help

You cannot read a metric's time series data from the Cloud SDK.

Updating distribution metrics

You can update a logs-based metric to change the description, filters, and the names of fields referenced in the metric. You can add new labels to the metric and you can change the regular expressions used to extract values for the metric and its labels.

You cannot change the names or types of logs-based metrics or their labels, and you cannot delete existing labels in a logs-based metric.

To edit or update a logs-based metric, do the following:

Logging Console

  1. Go to the Logs-Based Metrics page:

    Go to Logs-based Metrics

  2. Click Edit metric in the menu at the right side of the logs-based metric that you want to edit.

  3. Change only the allowable items in the metric.

  4. Click Done.

Logging API

To edit or update a logs-based metric, use the projects.metrics.update method in the API. Set the fields as follows:

  • metricName: The full resource name of the metric:

    projects/[PROJECT_ID]/metrics/[METRIC_ID]
    

    For example:

    projects/my-gcp-project/metrics/my-error-metric
    
  • In the request body, include a LogMetric object that is exactly the same as the existing metric except for the changes and additions you want to make.
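
For example, the following sketch fetches the existing metric, changes only its description, and writes it back using the Python discovery client; the project and metric IDs are placeholders:

from googleapiclient import discovery

service = discovery.build("logging", "v2")
metric_name = "projects/[PROJECT_ID]/metrics/[METRIC_ID]"

# Fetch the current definition, change only the allowed fields, and write it back.
metric = service.projects().metrics().get(metricName=metric_name).execute()
metric["description"] = "Updated description."

updated = service.projects().metrics().update(metricName=metric_name, body=metric).execute()
print(updated)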

Cloud SDK

You can only change the description and the filter for an existing metric using the Cloud SDK.

To update a logs-based metric, use the following command. You can specify either or both of the flags:

gcloud logging metrics update [METRIC_NAME] --description=[DESCRIPTION] --log-filter=[FILTER]

For more details, use the following command:

gcloud logging metrics update --help

Deleting distribution metrics

To delete a logs-based metric, do the following:

Logging Console

  1. Go to the Logs-Based Metrics page:

    Go to Logs-based Metrics

  2. Select the metrics you want to delete and click Delete at the top of the page.

    Alternatively, click Delete metric in the menu at the right side of the logs-based metric that you want to delete.

Logging API

Use the projects.metrics.delete method in the API.
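
For example, with the Python discovery client (the project and metric IDs are placeholders):

from googleapiclient import discovery

service = discovery.build("logging", "v2")
service.projects().metrics().delete(
    metricName="projects/[PROJECT_ID]/metrics/[METRIC_ID]"
).execute()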

Cloud SDK

Use the following command to delete a user-defined logs-based metric in the current project:

gcloud logging metrics delete [METRIC_NAME]

For more details, use the following command:

gcloud logging metrics delete --help
