Custom metrics with OpenCensus

Cloud Monitoring automatically collects more than 1,500 built-in metrics from more than 100 monitored resources. But those metrics can't capture application-specific data or client-side system data. They can give you information on backend latency or disk usage, but they can't tell you how many background routines your application spawned.

Application-specific metrics are metrics that you define and collect to capture information the built-in Cloud Monitoring metrics cannot. You capture such metrics by using an API provided by a library to instrument your code, and then you send the metrics to a backend application like Cloud Monitoring.

In Cloud Monitoring, application-specific metrics are typically called “custom metrics”; they are also called “user-defined metrics”. The terms are interchangeable.

As far as Cloud Monitoring is concerned, custom metrics can be used like the built-in metrics. You can chart them, set alerts on them, and otherwise monitor them. The difference is that you define the metrics, write data to them, and can delete them. You can't do any of that with the built-in metrics.

There are many ways to capture custom metrics, including using the Cloud Monitoring API directly. Cloud Monitoring recommends that you use OpenCensus to instrument your code for collecting custom metrics.

What is OpenCensus?

OpenCensus is a free, open-source project whose libraries:

  • Provide vendor-neutral support for the collection of metric and trace data across a variety of languages.
  • Can export the collected data to a variety of backend applications, including Cloud Monitoring.

For the current list of supported languages, see Language Support. For the current list of backend applications for which exporters are available, see Exporters.

Why OpenCensus?

Although Cloud Monitoring provides an API that supports defining and collecting custom metrics, it is a low-level, proprietary API. OpenCensus provides a much more idiomatic API, along with an exporter that sends your metric data to Cloud Monitoring through the Monitoring API for you.

Additionally, OpenCensus is an open-source project. You can export the collected data using a vendor-neutral library rather than a proprietary library.

OpenCensus also has good support for application tracing; see OpenCensus Tracing for a general overview. Cloud Trace recommends using OpenCensus for trace instrumentation. You can use a single distribution of libraries to collect both metric and trace data from your services. For information about using OpenCensus with Cloud Trace, see Client Libraries for Trace.

Before you begin

To use Cloud Monitoring, you must have a Cloud project with billing enabled.

If you don't have a Cloud project, do the following:

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Enable the Monitoring API. For details, see Enabling the Monitoring API.
  5. If your application runs outside of Google Cloud, it must authenticate with your Google Cloud project. For details, see Getting started with authentication.

Installing OpenCensus

To use OpenCensus, you must make the metrics libraries and the Stackdriver exporter available.


Go

Using OpenCensus requires Go version 1.11 or higher. The dependencies are handled automatically for you.


Java

For Maven, add the opencensus-api, opencensus-impl, and opencensus-exporter-stats-stackdriver dependencies to the dependencies element in your pom.xml file.
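For example, the dependency entries might look like the following. The artifact IDs are from the io.opencensus group on Maven Central; the versions shown are placeholders, so check for the current release:

```xml
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-api</artifactId>
  <version>0.31.1</version>
</dependency>
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-impl</artifactId>
  <version>0.31.1</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
  <version>0.31.1</version>
</dependency>
```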


Node.js

  1. Before installing the OpenCensus core and exporter libraries, make sure you've prepared your environment for Node.js development.
  2. The easiest way to install OpenCensus is with npm:
    npm install @opencensus/core
    npm install @opencensus/exporter-stackdriver
  3. Place the require statements shown below at the top of your application's main script or entry point, before any other code:
const {globalStats, MeasureUnit, AggregationType} = require('@opencensus/core');
const {StackdriverStatsExporter} = require('@opencensus/exporter-stackdriver');


Python

Install the OpenCensus core and Stackdriver exporter libraries by using the following command:

pip install -r opencensus/requirements.txt

The requirements.txt file is in the GitHub repository for these samples, python-docs-samples.
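If you aren't working from that repository, a minimal requirements file for these samples might contain just the core and exporter packages. The package names below are as published on PyPI; pin versions as appropriate for your project:

```text
opencensus
opencensus-ext-stackdriver
```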

Using OpenCensus for metrics

Instrumenting your code to use OpenCensus for metrics involves three general steps:

  1. Importing the OpenCensus stats and OpenCensus Stackdriver exporter packages.
  2. Initializing the Stackdriver exporter.
  3. Using the OpenCensus API to instrument your code.

A basic example

Following is a minimal program that illustrates these steps. It runs a loop and collects latency measures, and when the loop finishes, it exports the stats to Cloud Monitoring and exits:


Go

// metrics_quickstart is an example of exporting a custom metric from
// OpenCensus to Stackdriver.
package main

import (
	"context"
	"fmt"
	"log"
	"math/rand"
	"time"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

var (
	// The task latency in milliseconds.
	latencyMs = stats.Float64("task_latency", "The task latency in milliseconds", "ms")
)

func main() {
	ctx := context.Background()

	// Register the view. It is imperative that this step exists,
	// otherwise recorded metrics will be dropped and never exported.
	v := &view.View{
		Name:        "task_latency_distribution",
		Measure:     latencyMs,
		Description: "The distribution of the task latencies",

		// Latency in buckets:
		// [>=0ms, >=100ms, >=200ms, >=400ms, >=1s, >=2s, >=4s]
		Aggregation: view.Distribution(0, 100, 200, 400, 1000, 2000, 4000),
	}
	if err := view.Register(v); err != nil {
		log.Fatalf("Failed to register the view: %v", err)
	}

	// Enable OpenCensus exporters to export metrics
	// to Stackdriver Monitoring.
	// Exporters use Application Default Credentials to authenticate.
	exporter, err := stackdriver.NewExporter(stackdriver.Options{})
	if err != nil {
		log.Fatal(err)
	}
	// Flush must be called before main() exits to ensure metrics are recorded.
	defer exporter.Flush()

	if err := exporter.StartMetricsExporter(); err != nil {
		log.Fatalf("Error starting metric exporter: %v", err)
	}
	defer exporter.StopMetricsExporter()

	// Record 100 fake latency values between 0 and 5 seconds.
	for i := 0; i < 100; i++ {
		ms := float64(5*time.Second/time.Millisecond) * rand.Float64()
		fmt.Printf("Latency %d: %f\n", i, ms)
		stats.Record(ctx, latencyMs.M(ms))
		time.Sleep(1 * time.Second)
	}

	fmt.Println("Done recording metrics")
}


Java

import com.google.common.collect.Lists;
import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;
import io.opencensus.stats.Aggregation;
import io.opencensus.stats.BucketBoundaries;
import io.opencensus.stats.Measure.MeasureLong;
import io.opencensus.stats.Stats;
import io.opencensus.stats.StatsRecorder;
import io.opencensus.stats.View;
import io.opencensus.stats.View.Name;
import io.opencensus.stats.ViewManager;
import java.io.IOException;
import java.util.Collections;
import java.util.Random;
import java.util.concurrent.TimeUnit;

public class Quickstart {
  private static final int EXPORT_INTERVAL = 70;
  private static final MeasureLong LATENCY_MS =
      MeasureLong.create("task_latency", "The task latency in milliseconds", "ms");
  // Latency in buckets:
  // [>=0ms, >=100ms, >=200ms, >=400ms, >=1s, >=2s, >=4s]
  private static final BucketBoundaries LATENCY_BOUNDARIES =
      BucketBoundaries.create(Lists.newArrayList(0d, 100d, 200d, 400d, 1000d, 2000d, 4000d));
  private static final StatsRecorder STATS_RECORDER = Stats.getStatsRecorder();

  public static void main(String[] args) throws IOException, InterruptedException {
    // Register the view. It is imperative that this step exists,
    // otherwise recorded metrics will be dropped and never exported.
    View view =
        View.create(
            Name.create("task_latency_distribution"),
            "The distribution of the task latencies.",
            LATENCY_MS,
            Aggregation.Distribution.create(LATENCY_BOUNDARIES),
            Collections.emptyList());

    ViewManager viewManager = Stats.getViewManager();
    viewManager.registerView(view);

    // Enable OpenCensus exporters to export metrics to Stackdriver Monitoring.
    // Exporters use Application Default Credentials to authenticate.
    StackdriverStatsExporter.createAndRegister();

    // Record 100 fake latency values between 0 and 5 seconds.
    Random rand = new Random();
    for (int i = 0; i < 100; i++) {
      long ms = (long) (TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS) * rand.nextDouble());
      System.out.println(String.format("Latency %d: %d", i, ms));
      STATS_RECORDER.newMeasureMap().put(LATENCY_MS, ms).record();
    }

    // The default export interval is 60 seconds. The thread with the StackdriverStatsExporter must
    // live for at least the interval past any metrics that must be collected, or some risk being
    // lost if they are recorded after the last export.
    System.out.println(
        String.format(
            "Sleeping %d seconds before shutdown to ensure all records are flushed.",
            EXPORT_INTERVAL));
    Thread.sleep(TimeUnit.MILLISECONDS.convert(EXPORT_INTERVAL, TimeUnit.SECONDS));
  }
}


Node.js

'use strict';

const {globalStats, MeasureUnit, AggregationType} = require('@opencensus/core');
const {StackdriverStatsExporter} = require('@opencensus/exporter-stackdriver');

const EXPORT_INTERVAL = process.env.EXPORT_INTERVAL || 60;
const LATENCY_MS = globalStats.createMeasureInt64(
  'task_latency',
  MeasureUnit.MS,
  'The task latency in milliseconds'
);

// Register the view. It is imperative that this step exists,
// otherwise recorded metrics will be dropped and never exported.
const view = globalStats.createView(
  'task_latency_distribution',
  LATENCY_MS,
  AggregationType.DISTRIBUTION,
  [],
  'The distribution of the task latencies.',
  // Latency in buckets:
  // [>=0ms, >=100ms, >=200ms, >=400ms, >=1s, >=2s, >=4s]
  [0, 100, 200, 400, 1000, 2000, 4000]
);

// Then finally register the views
globalStats.registerView(view);

// Enable OpenCensus exporters to export metrics to Stackdriver Monitoring.
// Exporters use Application Default Credentials (ADCs) to authenticate.
// Expects ADCs to be provided through the environment as ${GOOGLE_APPLICATION_CREDENTIALS}
// A Stackdriver workspace is required and provided through the environment as ${GOOGLE_PROJECT_ID}
const projectId = process.env.GOOGLE_PROJECT_ID;

// GOOGLE_APPLICATION_CREDENTIALS are expected by a dependency of this code
// Not this code itself. Checking for existence here but not retaining (as not needed)
if (!projectId || !process.env.GOOGLE_APPLICATION_CREDENTIALS) {
  throw Error('Unable to proceed without a Project ID');
}

// The minimum reporting period for Stackdriver is 1 minute.
const exporter = new StackdriverStatsExporter({
  projectId: projectId,
  period: EXPORT_INTERVAL * 1000,
});

// Pass the created exporter to Stats
globalStats.registerExporter(exporter);

// Record 100 fake latency values between 0 and 5 seconds.
for (let i = 0; i < 100; i++) {
  const ms = Math.floor(Math.random() * 5 * 1000);
  console.log(`Latency ${i}: ${ms}`);
  globalStats.record([
    {
      measure: LATENCY_MS,
      value: ms,
    },
  ]);
}

/**
 * The default export interval is 60 seconds. The thread with the
 * StackdriverStatsExporter must live for at least the interval past any
 * metrics that must be collected, or some risk being lost if they are recorded
 * after the last export.
 */
setTimeout(() => {
  console.log('Done recording metrics.');
  globalStats.unregisterExporter(exporter);
}, EXPORT_INTERVAL * 1000);


Python

from random import random
import time

from opencensus.ext.stackdriver import stats_exporter
from opencensus.stats import aggregation
from opencensus.stats import measure
from opencensus.stats import stats
from opencensus.stats import view

# A measure that represents task latency in ms.
LATENCY_MS = measure.MeasureFloat(
    "task_latency",
    "The task latency in milliseconds",
    "ms")

# A view of the task latency measure that aggregates measurements according to
# a histogram with predefined bucket boundaries. This aggregate is periodically
# exported to Stackdriver Monitoring.
LATENCY_VIEW = view.View(
    "task_latency_distribution",
    "The distribution of the task latencies",
    [],
    LATENCY_MS,
    # Latency in buckets: [>=0ms, >=100ms, >=200ms, >=400ms, >=1s, >=2s, >=4s]
    aggregation.DistributionAggregation(
        [100.0, 200.0, 400.0, 1000.0, 2000.0, 4000.0]))


def main():
    # Register the view. Measurements are only aggregated and exported if
    # they're associated with a registered view.
    stats.stats.view_manager.register_view(LATENCY_VIEW)

    # Create the Stackdriver stats exporter and start exporting metrics in the
    # background, once every 60 seconds by default.
    exporter = stats_exporter.new_stats_exporter()
    print('Exporting stats to project "{}"'
          .format(exporter.options.project_id))

    # Register exporter to the view manager.
    stats.stats.view_manager.register_exporter(exporter)

    # Record 100 fake latency values between 0 and 5 seconds.
    for num in range(100):
        ms = random() * 5 * 1000
        mmap = stats.stats.stats_recorder.new_measurement_map()
        mmap.measure_float_put(LATENCY_MS, ms)
        mmap.record()
        print("Fake latency recorded ({}: {})".format(num, ms))

    # Keep the thread alive long enough for the exporter to export at least
    # once.
    time.sleep(65)


if __name__ == '__main__':
    main()
When this metric data is exported to Cloud Monitoring, you can use it like any other data.

The program creates an OpenCensus view called task_latency_distribution. This string becomes part of the name of the metric when it is exported to Cloud Monitoring. See Retrieving metric descriptors for how the OpenCensus view is realized as a Cloud Monitoring metric descriptor.

You can therefore use the view name as a search string when selecting a metric to chart. For example, you can type it into the Metric field in Metrics Explorer. The following screenshot shows the result:

Metrics from OpenCensus in Cloud Monitoring.

Each bar in the heatmap represents one run of the program, and the colored components of each bar represent buckets in the latency distribution. See OpenCensus metrics in Cloud Monitoring for more details about the data behind the chart.

OpenCensus documentation

OpenCensus provides the authoritative reference documentation for its metrics API and for the Stackdriver exporter. The following table provides links to these reference documents:

  Language   API Reference Documentation   Exporter Documentation      Quickstart
  Go         Go API                        Stats and Trace Exporters   Metrics
  Java       Java API                      Stats Exporter              Metrics
  NodeJS     NodeJS API                    Stats Exporter              Metrics
  Python     Python API                    Stats Exporter              Metrics

Mapping the models

Direct use of the Cloud Monitoring API for custom metrics is supported; it is described in Using Custom Metrics. In fact, the OpenCensus exporter for Cloud Monitoring uses this API for you.
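As a point of comparison, writing a single point of a custom metric through the Monitoring API directly can be sketched as follows. This is a hedged sketch, not the documented sample: it assumes the google-cloud-monitoring Python client library, and the helper names and metric type are illustrative only.

```python
import time


def gauge_point(project_id, metric_type, value, when=None):
    """Assemble a TimeSeries dict for a single GAUGE point of a custom
    metric. Custom metric types must start with "custom.googleapis.com/".
    """
    if not metric_type.startswith("custom.googleapis.com/"):
        raise ValueError('custom metric types must start with "custom.googleapis.com/"')
    seconds = int(when if when is not None else time.time())
    return {
        "metric": {"type": metric_type},
        "resource": {"type": "global", "labels": {"project_id": project_id}},
        "points": [
            {
                "interval": {"end_time": {"seconds": seconds}},
                "value": {"double_value": value},
            }
        ],
    }


def write_point(project_id, series):
    # Requires the google-cloud-monitoring client library and
    # Application Default Credentials.
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    client.create_time_series(
        request={"name": "projects/{}".format(project_id), "time_series": [series]}
    )
```

The exporter performs the equivalent of this bookkeeping for you, which is one reason the OpenCensus API is more pleasant to use directly.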

Even if you don't need to know the specifics of using the Cloud Monitoring API, familiarity with its constructs and terminology is useful for understanding how Cloud Monitoring represents the metrics. This section provides some of that background.

Once your metrics are ingested into Cloud Monitoring, they are stored within Cloud Monitoring constructs. You can, for example, retrieve the metric descriptor — a type from the Monitoring API — of a custom metric. See MetricDescriptor for more information. You encounter these metric descriptors, for example, when creating charts for your data.

Terminology and concepts

The constructs used by the OpenCensus API differ from those used by Cloud Monitoring, as does some use of terminology. Where Cloud Monitoring refers to “metrics”, OpenCensus sometimes refers to “stats”. For example, the component of OpenCensus that sends metric data to Cloud Monitoring is called the “stats exporter for Stackdriver”.

See OpenCensus Metrics for an overview of the OpenCensus model for metrics.

The data models for OpenCensus stats and Cloud Monitoring metrics do not fall into a neat 1:1 mapping. Many of the same concepts exist in each, but they are not directly interchangeable.

  • An OpenCensus view is generally analogous to the MetricDescriptor in the Monitoring API. A view describes how to collect and aggregate individual measurements. All recorded measurements are broken down by tags.

  • An OpenCensus tag is a key-value pair. This corresponds generally to the LabelDescriptor in the Monitoring API. Tags let you capture contextual information that you can use to filter and group metrics.

  • An OpenCensus measure describes metric data to be recorded. An OpenCensus aggregation is a function applied to data used to summarize it. These are used in exporting to determine the MetricKind, ValueType, and unit reported in the Cloud Monitoring metric descriptor.

  • An OpenCensus measurement is a data point collected for a measure. Measurements must be aggregated into views. Otherwise, the individual measurements are dropped. This construct is analogous to a Point in the Monitoring API. When measurements are aggregated in views, the aggregated data is stored as view data, analogous to a TimeSeries in the Monitoring API.
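To make the correspondence concrete, the following sketch shows, in simplified form, how a view's name and aggregation determine fields of the resulting metric descriptor. The mapping table is an illustration only; the real exporter handles more cases (for example, sum aggregations over integer measures produce INT64 values):

```python
# Simplified sketch of how the Stackdriver exporter derives Monitoring
# descriptor fields from an OpenCensus view. The table below is an
# assumption for illustration, not the exporter's actual code.
AGGREGATION_TO_DESCRIPTOR = {
    "count": ("CUMULATIVE", "INT64"),
    "sum": ("CUMULATIVE", "DOUBLE"),
    "last_value": ("GAUGE", "DOUBLE"),
    "distribution": ("CUMULATIVE", "DISTRIBUTION"),
}


def describe(view_name, aggregation):
    """Return the descriptor fields implied by a view name and aggregation.

    The exporter registers custom metrics under the
    custom.googleapis.com/opencensus/ prefix, with the view name as the
    final path component.
    """
    kind, value_type = AGGREGATION_TO_DESCRIPTOR[aggregation]
    return {
        "type": "custom.googleapis.com/opencensus/{}".format(view_name),
        "metricKind": kind,
        "valueType": value_type,
    }
```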

OpenCensus metrics in Cloud Monitoring

You can examine the exported metrics in Cloud Monitoring. The screenshot in A basic example was taken from Metrics Explorer. If you have run the sample program, you can use Metrics Explorer to look at your data.

To view the metrics for a monitored resource using Metrics Explorer, do the following:

  1. In the Google Cloud Console, go to Monitoring or use the following button:
    Go to Monitoring
  2. In the Monitoring navigation pane, click Metrics Explorer.
  3. Select the Configuration tab, and then enter or select a Resource type and a Metric.

You can supply the name of the OpenCensus view when specifying the metric to restrict the search. See Selecting metrics for more information.

Retrieving metric descriptors

You can retrieve the metric data using the Monitoring API directly. To retrieve the metric data, you need to know the Cloud Monitoring names to which the OpenCensus metrics were exported.

One way to get this information is to retrieve the metric descriptors that were created by the exporter and find the value of the type field. This value incorporates the name of the OpenCensus view from which it was exported. For details on metric descriptors, see MetricDescriptor.

You can see the metric descriptors created for the exported metrics by using the API Explorer (Try this API) widget on the reference page for the metricDescriptors.list method. To retrieve the metric descriptors for the OpenCensus metrics using this tool:

  1. Enter the name of your project in the name field: projects/[PROJECT_ID]. This document uses a project with the ID a-gcp-project.

  2. Enter a filter in the filter field. The name of the OpenCensus view becomes part of the metric name, so you can use that name to restrict the listing by providing a filter like this:

    metric.type = starts_with("custom.googleapis.com/opencensus/task_latency")

    There are a lot of metric descriptors in any project. Filtering on a substring from the OpenCensus view's name eliminates most of them.

  3. Click the Execute button.

The following shows the returned metric descriptor:

      "metricDescriptors": [
          "name": "projects/a-gcp-project/metricDescriptors/",
          "labels": [
              "key": "opencensus_task",
              "description": "Opencensus task identifier"
          "metricKind": "CUMULATIVE",
          "valueType": "DISTRIBUTION",
          "unit": "ms",
          "description": "The distribution of the task latencies",
          "displayName": "OpenCensus/task_latency_distribution",
          "type": ""

This line in the metric descriptor tells you the name of the metric type in Cloud Monitoring:

    "type": ""

With this information, you can then manually retrieve the data associated with this metric type. This is also the data that appears on a chart for this metric.
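The same listing can also be done programmatically. The following sketch assumes the google-cloud-monitoring Python client library; the helper names are illustrative:

```python
def descriptor_filter(view_name):
    """Build a metricDescriptors.list filter from an OpenCensus view name.

    The exporter registers metrics under the custom.googleapis.com/opencensus/
    prefix, so a starts_with filter on the view name isolates them.
    """
    return 'metric.type = starts_with("custom.googleapis.com/opencensus/{}")'.format(
        view_name
    )


def list_opencensus_descriptors(project_id, view_name):
    # Requires the google-cloud-monitoring client library and
    # Application Default Credentials.
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    return client.list_metric_descriptors(
        request={
            "name": "projects/{}".format(project_id),
            "filter": descriptor_filter(view_name),
        }
    )
```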

Retrieving metric data

To manually retrieve time-series data from a metric type, you can use the Try this API tool on the reference page for the timeSeries.list method:

  1. Enter the name of your project in the name field: projects/[PROJECT_ID]
  2. Enter a filter in the filter field for the desired metric type: metric.type="custom.googleapis.com/opencensus/task_latency_distribution"
    • The key, metric.type, refers to a field of the Metric type embedded in a time series. See TimeSeries for details.
    • The value is the type value extracted from the metric descriptor in Retrieving metric descriptors.
  3. Enter time boundaries for the retrieval by specifying values for these fields:
    • interval.endTime as a timestamp, for example: 2018-10-11T15:48:38-04:00
    • interval.startTime (must be earlier than interval.endTime)
  4. Click the Execute button.
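The steps above can also be performed programmatically. This sketch assumes the google-cloud-monitoring Python client library; the helper names are illustrative:

```python
def time_series_request(project_id, metric_type, start_seconds, end_seconds):
    """Assemble the timeSeries.list arguments: a metric.type filter plus an
    interval whose startTime is earlier than its endTime."""
    if start_seconds >= end_seconds:
        raise ValueError("interval.startTime must be earlier than interval.endTime")
    return {
        "name": "projects/{}".format(project_id),
        "filter": 'metric.type = "{}"'.format(metric_type),
        "interval": {
            "start_time": {"seconds": start_seconds},
            "end_time": {"seconds": end_seconds},
        },
    }


def fetch_time_series(project_id, metric_type, start_seconds, end_seconds):
    # Requires the google-cloud-monitoring client library and
    # Application Default Credentials.
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    return client.list_time_series(
        request=time_series_request(project_id, metric_type, start_seconds, end_seconds)
    )
```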

The following shows the result of one such retrieval:

      "timeSeries": [
          "metric": {
            "labels": {
              "opencensus_task": "java-3424@docbuild"
            "type": ""
          "resource": {
            "type": "gce_instance",
            "labels": {
              "instance_id": "2455918024984027105",
              "zone": "us-east1-b",
              "project_id": "a-gcp-project"
          "metricKind": "CUMULATIVE",
          "valueType": "DISTRIBUTION",
          "points": [
              "interval": {
                "startTime": "2019-04-04T17:49:34.163Z",
                "endTime": "2019-04-04T17:50:42.917Z"
              "value": {
                "distributionValue": {
                  "count": "100",
                  "mean": 2610.11,
                  "sumOfSquaredDeviation": 206029821.78999996,
                  "bucketOptions": {
                    "explicitBuckets": {
                      "bounds": [
                  "bucketCounts": [
        [ ... data from additional program runs deleted ...]

The data returned here includes:

  • Information about the monitored resource on which the data was collected. OpenCensus can automatically detect gce_instance, k8s_container, and aws_ec2_instance monitored resources. This data came from a program run on a Compute Engine instance. For information on using other monitored resources, see Set monitored resource for exporter.
  • Description of the kind of metric and the type of the values.
  • The actual data points collected within the time interval requested.