Using Prometheus

Prometheus is an optional monitoring tool often used with Kubernetes. If you configure Stackdriver Kubernetes Monitoring with Prometheus support, then metrics from services that expose them in the Prometheus data model can be exported from the cluster and made visible as external metrics in Stackdriver.

This page presents a basic configuration for Prometheus that works with Stackdriver Kubernetes Monitoring.

Before you begin

These instructions assume that you have a Kubernetes Engine cluster with Stackdriver Kubernetes Monitoring enabled.

Configuration

Use the following instructions to install the basic Prometheus configuration in a new cluster, or to reapply it in a cluster previously configured with these instructions. If you want to adapt your own Prometheus configuration instead, see Customization later on this page.

CONSOLE

You cannot use the Kubernetes Engine console to configure Prometheus.

GCLOUD

  1. Sign in to your cluster so that your kubectl commands run against it (for example, with gcloud container clusters get-credentials).
  2. Download the Kubernetes auth configuration (YAML):

    curl -sSO "https://storage.googleapis.com/stackdriver-prometheus-documentation/rbac-setup.yml"
    
  3. Have a cluster admin (which could be yourself) run the following to set up a Kubernetes service account (named "prometheus") for the collector:

    kubectl apply -f rbac-setup.yml --as=admin --as-group=system:masters
    
  4. Download the default basic Prometheus configuration (YAML):

    curl -sSO "https://storage.googleapis.com/stackdriver-prometheus-documentation/prometheus-service.yml"
    
  5. Look for the following labels in that file, and modify their values to identify your cluster (an example follows these steps):

    • _stackdriver_project_id: [PROJECT_ID]
    • _kubernetes_cluster_name: [CLUSTER_NAME]
    • _kubernetes_location: [CLUSTER_LOCATION]
  6. Run the following to start the server using your modified configuration:

    kubectl apply -f prometheus-service.yml
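
For example, after step 5 the three labels in prometheus-service.yml might read as follows; the values shown (my-gcp-project, my-cluster, and us-central1-a) are placeholders for your own project ID, cluster name, and location:

    _stackdriver_project_id: my-gcp-project
    _kubernetes_cluster_name: my-cluster
    _kubernetes_location: us-central1-a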
    

Validating the configuration

After configuring Prometheus, you should see the following in your cluster (the kubectl command after this list is one way to check):

  • Namespace: stackdriver
  • Deployment: prometheus
  • Service: prometheus
  • Number of pods: 1
  • Pod name: prometheus-...
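
A quick way to check is with kubectl, which lists those objects in the stackdriver namespace:

    kubectl get deployments,services,pods --namespace stackdriver

If the installation is healthy, the prometheus pod should be shown as Running.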

The Prometheus software you installed is pre-configured to begin exporting metrics to Stackdriver Monitoring as external metrics. You can see them in Stackdriver > Resources > Metrics Explorer.

Look in the monitored resource type Kubernetes Container (k8s_container) for metrics named external/prometheus/.... A metric that should have some interesting data is external/prometheus/go_memstats_alloc_bytes. If you have more than one cluster in your Stackdriver account, you might want to filter the chart on the cluster name, as shown in the following screenshot:

Prometheus chart
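
Outside the console, the same data can be selected with a Monitoring API time-series filter along the following lines. This is a sketch; my-cluster is a placeholder for your cluster name:

    metric.type = "external.googleapis.com/prometheus/go_memstats_alloc_bytes" AND
    resource.type = "k8s_container" AND
    resource.labels.cluster_name = "my-cluster"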

Customization

If you have an existing Prometheus configuration, you can adapt it for Stackdriver Kubernetes Monitoring with the following changes:

  1. Copy the following three stanzas from the provided basic configuration (YAML) into your own configuration (see the sketch after these steps):

    • image
    • external_labels
    • remote_write
  2. Follow the instructions in the preceding Configuration section, substituting your own configuration for the basic configuration.
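
The following sketch shows roughly where those three stanzas sit in your own configuration. It is illustrative only; copy the real stanzas verbatim from the provided basic configuration, and treat every value shown here as a placeholder:

    # Prometheus server configuration (your existing scrape_configs stay as they are):
    global:
      external_labels:
        _stackdriver_project_id: my-gcp-project
        _kubernetes_cluster_name: my-cluster
        _kubernetes_location: us-central1-a

    remote_write:
      # Copy this stanza, including its write_relabel_configs, from the
      # provided basic configuration.
      - url: REPLACE_WITH_URL_FROM_PROVIDED_CONFIGURATION

    # In the Prometheus Deployment manifest, also use the container image from
    # the provided basic configuration:
    #
    #   containers:
    #     - name: prometheus
    #       image: REPLACE_WITH_IMAGE_FROM_PROVIDED_CONFIGURATION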

Annotations

You can annotate your pods before or after you configure Prometheus.

The basic configuration assumes that the pods you want to monitor use the following annotation:

prometheus.io/scrape: 'true'

This and other annotations are documented in the configuration file prometheus-service.yml.
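
For example, a pod that the basic configuration would scrape might carry the annotation like this. This is a minimal sketch; the pod name, image, and port are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app
      annotations:
        prometheus.io/scrape: 'true'
        # Other annotations are documented in prometheus-service.yml.
    spec:
      containers:
        - name: example-app
          image: example-app:latest
          ports:
            - containerPort: 8080   # the port that exposes Prometheus metrics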

Prometheus integration issues

My metrics are missing the job and instance Prometheus labels.

The Prometheus job and instance labels might appear in the Stackdriver monitored resource associated with the metric data under other names. If you need to change this, look for the write_relabel_configs section in the default configuration.
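
As an illustrative sketch only (not the exact contents of the default configuration), a write_relabel_configs rule that copies the Prometheus job label into an ordinary metric label, so that it survives under a name you choose, looks like this:

    remote_write:
      # Keep the url and other fields from the default configuration; this
      # sketch shows only an added relabeling rule.
      - write_relabel_configs:
          - source_labels: [job]
            action: replace
            target_label: prometheus_job   # example name; choose your own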

The up metric has no data points for the times the endpoint is not up.

This is a deviation from the typical Prometheus behavior. If you rely on this metric for alerting, you can use the metric absence alerting condition in your Stackdriver alert policy.

We made this change to avoid other problems, described in more detail under the duplicate time series issue below.
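
For example, the condition in such an alerting policy might look like the following sketch (using the Monitoring API's conditionAbsent form; the display name and duration are placeholders):

    conditions:
      - displayName: Prometheus exporter is down
        conditionAbsent:
          duration: 300s
          filter: >
            metric.type = "external.googleapis.com/prometheus/up" AND
            resource.type = "k8s_container"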

I modified your default configuration and things stopped working.

The Stackdriver Prometheus collector constructs a Stackdriver MonitoredResource for your Kubernetes objects from well-known Prometheus labels. If you accidentally rename or drop those labels, the collector can't construct the monitored resource and therefore can't write the metrics to Stackdriver.

I see "duplicate time series" or "out-of-order writes" errors in the logs.

These errors can be caused by writing metric data twice to the same time series. They can occur if your Prometheus endpoints expose the same metric data—the same set of metric label values—twice from a single Stackdriver monitored resource.

For example, a Kubernetes container might expose Prometheus metrics on multiple ports. Because the Stackdriver k8s_container monitored resource doesn't differentiate resources by port, Stackdriver detects that you are writing two points to the same time series. A workaround is to add a metric label in Prometheus that differentiates the time series. For example, you might copy the __meta_kubernetes_pod_annotation_prometheus_io_port label into a metric label, because it should remain constant across container restarts (see the sketch below).
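
A scrape-time relabeling rule can do this. The following is a minimal sketch; the job name and the target label name port are arbitrary:

    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # Copy the pod's prometheus.io/port annotation into a metric label so
          # that series scraped from different ports stay distinct.
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            target_label: port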
