Overview of logs-based metrics

This page provides a conceptual overview of logs-based metrics.

Logs-based metrics are Stackdriver Monitoring metrics that are based on the content of log entries. For example, the metrics can record the number of log entries containing particular messages, or they can extract latency information reported in log entries. You can use logs-based metrics in Stackdriver Monitoring charts and alerting policies.

System (logs-based) metrics are predefined by Logging. These metrics record the number of logging events that occurred within a specific time period.

User-defined (logs-based) metrics are created by a user on a project. They count the number of log entries that match a given query, or keep track of particular values within the matching log entries.

Logging accumulates information for logs-based metrics each time it receives a matching log entry. Logging writes a new data point to the metric's time series once per minute, making the data available to Stackdriver Monitoring.

The data for logs-based metrics comes from log entries received after the metrics are created. The metrics are not populated with data from log entries that are already in Logging.

Getting started

  1. Go to the Stackdriver Logging > Logs-based metrics page in the Cloud Console.

  2. Select an existing Google Cloud project at the top of the page. The lists of logs-based metrics in your Google Cloud project appear:

The user interface showing the logs-based metrics lists.

Access control

Cloud Identity and Access Management roles and permissions govern access to Google Cloud data. Following is a summary of the common roles and permissions a Google Cloud project member needs to access logs-based metrics:

  • Logging/Logs Configuration Writer (roles/logging.configWriter) lets you list, create, get, update, and delete logs-based metrics.

  • Logging/Logs Viewer (roles/logging.viewer) lets you view existing metrics. You can also add the logging.logMetrics.get and logging.logMetrics.list permissions to a custom role.

  • Monitoring Viewer (roles/monitoring.viewer) lets you read the time series that the logs-based metric contains. You can also add the monitoring.timeSeries.list permission to a custom role.

  • Logging Admin (roles/logging.admin), Editor (roles/editor), and Owner (roles/owner) are broad-level roles that contain the permission to create logs-based metrics (logging.logMetrics.create).

For more information, go to Logging: Access control.

Logs-based metrics interface

The logs-based metrics interface is divided into two metric-type areas: System metrics and User-defined metrics.

Both areas contain a table summary of the metrics. Each table row has a menu that features the following options:

The logs-based metrics lists showing the overflow menu.

  • View in Metrics Explorer lets you view the data for a logs-based metric by opening the Stackdriver Monitoring Metrics Explorer.

    You can use the Metrics Explorer to specify a target metric for an alerting policy. The chart next to the Target region gives you visual feedback on the data being captured by the target metric.

  • Create alert from metric lets you create an alerting policy based on your logs-based metric.

    Selecting this option opens the Stackdriver Monitoring console, where you can create, edit, and manage alerting policies. For details, read Creating an alerting policy.

User-defined metrics interface

The User-defined metrics area of the Logs-based metrics interface has several additional features to help you manage the user-defined metrics on a project:

  • The user-defined metrics table includes Name, Description, Type, and Filter columns. These are specified when the metric is created.

  • The Filter Metrics box lets you filter your metric list by text search or metric Name, Description, and Filter:

The logs-based metrics lists showing the filtering options.

  • The user-defined metrics table includes columns for Previous Month Usage and Usage (MTD). These usage metrics are useful, for example, if you want to determine which metrics ingest the most data or to estimate your bills.

  • You can Edit metric and Delete metric using the menu at the end of a table row.

In addition, clicking any of the column names sorts the data in ascending or descending order. At the bottom of the table, you can also select the number of rows to display.

For more information on managing your user-defined metrics using the Cloud Console, read Creating counter metrics and Creating distribution metrics.

Overview of logs-based metric types

Logging logs-based metrics can be one of two metric types: counter or distribution. All system logs-based metrics are the counter type. User-defined logs-based metrics can be either the counter type or the distribution type.

Each data point in a logs-based metric's time series represents only the additional information (the delta) received since the previous data point.
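
The delta behavior can be illustrated with a small sketch (the numbers here are hypothetical): if a counter metric's cumulative count is sampled once per minute, each written point carries only the increase since the previous sample.

```python
# Hypothetical running count of matching log entries, sampled once per minute.
cumulative = [0, 4, 9, 9, 15]

# Each written data point carries only the delta since the previous point.
deltas = [b - a for a, b in zip(cumulative, cumulative[1:])]
print(deltas)  # [4, 5, 0, 6]
```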

The following sections describe the characteristics of counter-type and distribution-type metrics.

Counter metrics

Counter metrics count the number of log entries matching an advanced logs query. For example, you can do the following:

  • Create a metric that counts log entries containing a certain specific error message.
  • Count the number of times each user invokes an operation, by looking for log messages like this:

    ... user [USERNAME] called  [OPERATION] ...

    By extracting [USERNAME] and [OPERATION] and using them as values for two labels, you can later ask, "How many times did sally call the UPDATE operation?", "How many people called the READ operation?", "How many times did george call an operation?", and so on.
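
As a sketch of this idea (the regular expression and log lines below are hypothetical, not Logging's own label-extractor syntax), capture groups can pull out the two label values, and each combination of values then accumulates its own count:

```python
import re
from collections import Counter

# Hypothetical log lines matching the "user [USERNAME] called [OPERATION]" pattern.
entries = [
    "2020-02-20 14:30:00 user sally called UPDATE on /items/1",
    "2020-02-20 14:31:00 user george called READ on /items/2",
    "2020-02-20 14:32:00 user sally called UPDATE on /items/3",
]

# A capture-group regex, analogous to the regular expressions used
# when configuring label extractors for a logs-based metric.
pattern = re.compile(r"user (\w+) called (\w+)")

# Count one data point per (username, operation) label combination;
# each combination becomes its own time series.
counts = Counter()
for line in entries:
    match = pattern.search(line)
    if match:
        counts[match.groups()] += 1

print(counts[("sally", "UPDATE")])  # how many times sally called UPDATE
```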

For more information, go to Creating counter metrics.

Distribution metrics

Distribution metrics accumulate numeric data from log entries matching a query. The metrics contain a time series of Distribution objects, each of which contains the following:

  • A count of the number of values in the distribution.
  • The mean of the values.
  • The sum of squared deviations: Sum[i=1..n](x_i − mean)^2
  • A set of histogram buckets with the count of values in each bucket. You can use the default bucket layout or choose your own.
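
As an illustration, the fields of a Distribution object can be computed from a set of hypothetical latency values. The explicit bucket boundaries below are arbitrary, standing in for a custom layout rather than the default one:

```python
# Hypothetical latency values (ms) extracted from matching log entries.
values = [120.0, 80.0, 200.0, 95.0, 310.0]

count = len(values)
mean = sum(values) / count
sum_sq_dev = sum((x - mean) ** 2 for x in values)

# Explicit bucket boundaries defining four buckets:
# (-inf, 100), [100, 200), [200, 300), [300, +inf)
bounds = [100.0, 200.0, 300.0]
buckets = [0] * (len(bounds) + 1)
for x in values:
    i = sum(1 for b in bounds if x >= b)  # index of the bucket x falls into
    buckets[i] += 1

print(count, mean, buckets)  # 5 161.0 [2, 1, 1, 1]
```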

A common use for distribution metrics is to track latencies. As each log entry is received, a latency value is extracted from somewhere in the log entry and is added to the distribution. At regular intervals, the accumulated distribution is written to Stackdriver Monitoring.

For more information, go to Creating distribution metrics.


Labels

Logs-based metrics can optionally have labels, which allow a single metric to hold multiple time series. Values for the labels are extracted from fields in the matching log entries. Logging records separate time series for each different value of your label.

The system logs-based metrics have predefined labels. You can define your own labels for your user-defined metrics. For more information, read Logs-based metric labels.

System logs-based metrics

Logging provides some predefined counter metrics that track the number and volume of log entries received. The metrics have labels that record the counts by log name and severity level. Each of these metrics is a delta metric with int64 values:

  • The total number of log entries received. Labels: log (the name of the log; for example, "appengine.googleapis.com/request_log") and severity (the severity of the log entries; for example, "ERROR").

  • The total number of bytes received in log entries. Labels: log.

  • The total number of log entries that were excluded.

  • The total number of bytes in log entries that were excluded.

  • The total number of log entries that were exported using sinks.

  • The total number of bytes in log entries that were exported using sinks.

  • The total number of log entries that were not exported due to export configuration errors.

  • The number of late-arriving log entries.¹ Labels: log.

¹ The late-arriving log entries are not included in the log_entry_count or byte_count metrics.

For a full list of system logs-based metrics, go to Logging metrics.

Stackdriver Monitoring

You can use both system and user-defined logs-based metrics in Stackdriver Monitoring to create charts and alerting policies. Your user-defined logs-based metric names are prefixed by user/; the system logs-based metrics are not.

If you are using the Stackdriver Monitoring API, the logs-based metric names take the following forms:

  • System metrics: logging.googleapis.com/METRIC_NAME; for example, logging.googleapis.com/log_entry_count.
  • User-defined metrics: logging.googleapis.com/user/METRIC_NAME.

For more information, go to Creating charts and alerts.


Troubleshooting

Metric is missing logs data

There are several possible reasons for missing data in logs-based metrics:

  • New log entries might not match your metric's logs query. A logs-based metric gets data from matching log entries that are received after the metric is created. Logging does not backfill the metric from previous log entries.

  • New log entries might not contain the correct field, or the data might not be in the correct format for extraction by your distribution metric. Check that your field names and regular expressions are correct.

  • Your metric counts might be delayed. Even though matching log entries already appear in the Logs Viewer, it can take up to a minute for the logs-based metrics in Stackdriver Monitoring to be updated.

  • The log entries that are displayed might be counted late or might not be counted at all, because their timestamps are too far in the past or future. If Stackdriver Logging receives a log entry whose timestamp is more than 24 hours in the past or more than 10 minutes in the future relative to the time of receipt, the entry isn't counted in the logs-based metric.

    The number of late-arriving entries is recorded for each log in the system logs-based metric logging.googleapis.com/logs_based_metrics_error_count.

    Example: A log entry matching a logs-based metric arrives late. It has a timestamp of 2:30 PM on February 20, 2020 and a receivedTimestamp of 2:45 PM on February 21, 2020. This entry won't be counted in the logs-based metric.
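
The acceptance window can be sketched as a check on an entry's timestamp relative to its receivedTimestamp (a simplified model of the documented rule, using the example above):

```python
from datetime import datetime, timedelta

def is_counted(timestamp, received):
    """True if a log entry's timestamp falls inside the window that
    logs-based metrics accept: no more than 24 hours before, and no
    more than 10 minutes after, the time the entry was received."""
    low = received - timedelta(hours=24)
    high = received + timedelta(minutes=10)
    return low <= timestamp <= high

# The example from the text: timestamp 2:30 PM Feb 20, received 2:45 PM Feb 21.
ts = datetime(2020, 2, 20, 14, 30)
rx = datetime(2020, 2, 21, 14, 45)
print(is_counted(ts, rx))  # False: more than 24 hours late
```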

Metric has too many time series

The number of time series in a metric depends on the number of different combinations of label values. The number of time series is called the cardinality of the metric, and it must not exceed 30,000.

Because a time series is generated for every combination of label values, one or more labels with a high number of values can easily push a metric past 30,000 time series. You want to avoid high-cardinality metrics.
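
The worst-case cardinality is the product of the number of distinct values each label can take. A quick sketch with hypothetical label counts shows how one unbounded label dominates:

```python
from math import prod

# Hypothetical counts of distinct values for each label on a metric.
label_cardinalities = {
    "status_code": 60,   # bounded: a known set of HTTP status codes
    "method": 9,         # bounded: a known set of HTTP methods
    "user_id": 50_000,   # unbounded: one value per user -- problematic
}

# In the worst case, every combination of label values gets its own time series.
time_series = prod(label_cardinalities.values())
print(time_series, time_series <= 30_000)  # 27000000 False
```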

As the cardinality of a metric increases, the metric can get throttled and some data points might not be written to the metric. Charts that display the metric can be slow to load due to the large number of time series that the chart has to process. You might also incur costs for API calls to query time series data; review Stackdriver Monitoring costs for details.

To avoid creating high cardinality metrics:

  • Check that your label fields and extractor regular expressions match values that have a limited cardinality.

  • Avoid extracting free-form text messages, which can change without bound, as label values.

  • Avoid extracting numerical values with unbounded cardinality.

  • Only extract values from fields with known cardinality; for instance, status codes with a fixed set of known values.

These two system logs-based metrics can help you measure the effect that adding or removing labels has on the cardinality of your metric:

When you inspect these metrics, you can further filter your results by metric name. For details, go to Selecting metrics: filtering.

Metric name is invalid

When you create a counter or distribution metric, choose a metric name that is unique among the logs-based metrics in your project.

Metric-name strings must not exceed 100 characters and can include only the following characters:

  • A-Z
  • a-z
  • 0-9
  • The special characters _-.,+!*',()%\/.

    The forward slash character / denotes a hierarchy of pieces within the metric name and cannot be the first character of the name.
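
These rules can be sketched as a small validator (an approximation of the documented constraints, not the service's own validation logic):

```python
import re

# Allowed characters per the rules above; '/' may appear anywhere except
# as the first character of the name.
_FIRST = r"[A-Za-z0-9_\-.,+!*',()%\\]"
_REST = r"[A-Za-z0-9_\-.,+!*',()%\\/]*"
_NAME_RE = re.compile(_FIRST + _REST)

def is_valid_metric_name(name: str) -> bool:
    """Check length (at most 100 characters), allowed characters,
    and that the name does not start with a forward slash."""
    return 0 < len(name) <= 100 and _NAME_RE.fullmatch(name) is not None

print(is_valid_metric_name("errors/404_count"))    # True
print(is_valid_metric_name("/starts_with_slash"))  # False
```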

Label values are truncated

Values for user-defined labels must not exceed 1,024 bytes.
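
Because the limit is measured in bytes rather than characters, multi-byte UTF-8 characters count more than once toward it; a quick sketch:

```python
LIMIT = 1024  # the limit is in bytes, not characters

# "é" encodes to 2 bytes in UTF-8, so 600 characters exceed the byte limit.
label = "é" * 600
encoded = label.encode("utf-8")
print(len(label), len(encoded), len(encoded) > LIMIT)  # 600 1200 True
```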