Using Prometheus

Prometheus is an optional monitoring tool often used with Kubernetes. If you configure Stackdriver Kubernetes Monitoring with Prometheus support, then the metrics that services expose in the Prometheus data model can be exported from the cluster and made visible as external metrics in Stackdriver.

This page describes a mechanism, compatible with Stackdriver Kubernetes Monitoring, by which Stackdriver collects data from Prometheus clients. The source code for the integration is publicly available.

Before you begin

  • You must be an Owner of the project containing your cluster. For more information on Owner privileges, see IAM Role Types.

  • You must have already installed Stackdriver Kubernetes Monitoring in your cluster. For instructions, see Installing Stackdriver Kubernetes Monitoring.

  • You must be running a compatible Prometheus Server and have configured it to monitor the applications in your cluster. See the compatibility matrix.

The Prometheus support described here does not work with the legacy Stackdriver support that is described in Monitoring.

Configuration

Stackdriver provides a collector that needs to be deployed as a sidecar in the same Kubernetes pod as a working Prometheus server.

Use the following shell commands to install the Stackdriver collector in a new cluster using Stackdriver Kubernetes Monitoring. To learn more about installing Prometheus on your cluster, read the Prometheus Getting Started guide. To add the sidecar to an existing Prometheus Server configuration, see the Customization section below.

Log in to your cluster, then run the following script after setting the required environment variables:

  • KUBE_NAMESPACE: namespace to run the script against
  • KUBE_CLUSTER: cluster name parameter for the sidecar
  • GCP_REGION: GCP region parameter for the sidecar
  • GCP_PROJECT: GCP project parameter for the sidecar
  • DATA_DIR: data directory for the sidecar
  • DATA_VOLUME: name of the volume that contains Prometheus's data
  • SIDECAR_IMAGE_TAG: Docker image version for the sidecar. We recommend using the latest release from the Container Registry.
#!/bin/sh
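# Adds the Stackdriver collector sidecar to an existing Prometheus
# Deployment or StatefulSet by patching its pod template.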

set -e
set -u

usage() {
  printf 'Usage: %s <deployment|statefulset> <name>\n\n' "$0"
}

if [ "$#" -ne 2 ]; then
  usage
  exit 1
fi

kubectl -n "${KUBE_NAMESPACE}" patch "$1" "$2" --type strategic --patch "
spec:
  template:
    spec:
      containers:
      - name: sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:${SIDECAR_IMAGE_TAG}
        imagePullPolicy: Always
        args:
        - \"--stackdriver.project-id=${GCP_PROJECT}\"
        - \"--prometheus.wal-directory=${DATA_DIR}/wal\"
        - \"--stackdriver.kubernetes.location=${GCP_REGION}\"
        - \"--stackdriver.kubernetes.cluster-name=${KUBE_CLUSTER}\"
        #- \"--stackdriver.generic.location=${GCP_REGION}\"
        #- \"--stackdriver.generic.namespace=${KUBE_CLUSTER}\"
        ports:
        - name: sidecar
          containerPort: 9091
        volumeMounts:
        - name: ${DATA_VOLUME}
          mountPath: ${DATA_DIR}
"

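For example, assuming you saved the script as patch-sidecar.sh and want to patch a Deployment named prometheus, you might run it as follows. All of the values shown are placeholders; substitute your own:

export KUBE_NAMESPACE=default
export KUBE_CLUSTER=my-cluster
export GCP_REGION=us-central1
export GCP_PROJECT=my-gcp-project
export DATA_DIR=/data
export DATA_VOLUME=data-volume
export SIDECAR_IMAGE_TAG=0.4.3   # example tag; use the latest release from Container Registry

./patch-sidecar.sh deployment prometheus
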
Validating the configuration

After configuring Prometheus, run the following command to validate the installation:

kubectl get deployment,service -n stackdriver

The output of this command shows that the prometheus-meta Deployment and its Service have been created; the Deployment can take a short time to become available:

NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/prometheus-meta   1         1         1            0           28s

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/prometheus-meta   LoadBalancer   10.51.240.103   <pending>     9090:32387/TCP,9091:30182/TCP   28s
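
If the Deployment does not become available, you can inspect the collector's logs. The following check assumes the sidecar container in the prometheus-meta Deployment is named sidecar, as it is in the patch script above:

kubectl -n stackdriver logs deployment/prometheus-meta -c sidecar --tail=20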

The Prometheus software you installed is pre-configured to begin exporting metrics to Monitoring as external metrics. You can see them in Stackdriver > Resources > Metrics Explorer.

Look in the monitored resource type Kubernetes Container (k8s_container) for metrics named external/prometheus/.... A metric that should have some interesting data is external/prometheus/go_memstats_alloc_bytes. If you have more than one cluster in your Workspace, then you might want to filter the chart on the cluster name, as shown in the following screenshot:

[Screenshot: Metrics Explorer chart of a Prometheus metric, filtered by cluster name]
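
You can also confirm that the metric descriptors were created by querying the Monitoring API directly. This is a sketch that assumes the gcloud CLI is authenticated and that GCP_PROJECT is still set from the installation step:

# List the metric descriptors created from Prometheus metrics.
curl -s -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  --data-urlencode 'filter=metric.type = starts_with("external.googleapis.com/prometheus/")' \
  "https://monitoring.googleapis.com/v3/projects/${GCP_PROJECT}/metricDescriptors"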

Customization

If you want Stackdriver to collect data from an existing Prometheus Server deployment, you can run the Stackdriver collector as a sidecar in the same pod as the Prometheus Server:

  1. Make sure Prometheus Server is writing to a shared volume:

    • Ensure that there is a shared volume in the Prometheus pod:

      volumes:
        - name: data-volume
          emptyDir: {}
      
    • Have Prometheus mount the volume under /data:

      volumeMounts:
      - name: data-volume
        mountPath: /data
      
    • Instruct Prometheus Server to write to the shared volume in /data. Add the following to its container args:

      --storage.tsdb.path=/data
      
  2. Add the collector container as a sidecar.

    - name: sidecar
      image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:[SIDECAR_IMAGE_TAG]
      args:
      - "--stackdriver.project-id=[PROJECT_ID]"
      - "--prometheus.wal-directory=/data/wal"
      - "--stackdriver.kubernetes.location=[CLUSTER_LOCATION]"
      - "--stackdriver.kubernetes.cluster-name=[CLUSTER_NAME]"
      ports:
      - name: sidecar
        containerPort: 9091
      volumeMounts:
      - name: data-volume
        mountPath: /data
    

For additional information on how to configure the collector, refer to the Stackdriver Prometheus sidecar documentation.

Prometheus integration issues

I'm sure I saw Prometheus metric types before, but now I can't find them!

The Prometheus software you installed is pre-configured to export metrics to Stackdriver Monitoring as external metrics. When data is exported, Monitoring creates the appropriate metric descriptor for the external metric. If no data of that metric type is subsequently written for at least 6 weeks, the metric descriptor is subject to deletion.

There is no guarantee that unused metric descriptors will be deleted after 6 weeks, but Monitoring reserves the right to delete any Prometheus metric descriptor that hasn't been used in the previous 6 weeks.

My metrics are missing the job and instance Prometheus labels.

The Prometheus job and instance labels might appear in the Stackdriver monitored resource associated with the metric data under other names. If you need to change this, look for the write_relabel_config section in the default configuration.

The up metric has no data points for the times the endpoint is not up.

This is a deviation from the typical Prometheus behavior. If you rely on this metric for alerting, you can use the metric absence alerting condition in your Stackdriver alert policy.

This change was made to avoid the problems described in more detail in the entry on duplicate time series errors below.
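
For reference, here is a sketch of such a policy created through the Monitoring API with gcloud. The display names, the 300s duration, and the use of the alpha command group are illustrative assumptions:

cat > absent-up-policy.json <<'EOF'
{
  "displayName": "Prometheus endpoint down (example)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "up metric is absent",
      "conditionAbsent": {
        "filter": "metric.type = \"external.googleapis.com/prometheus/up\" AND resource.type = \"k8s_container\"",
        "duration": "300s"
      }
    }
  ]
}
EOF

gcloud alpha monitoring policies create --policy-from-file=absent-up-policy.json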

I modified your default configuration and things stopped working.

The Stackdriver Prometheus collector constructs a Stackdriver MonitoredResource for your Kubernetes objects from well-known Prometheus labels. If you accidentally change or remove those labels, the collector isn't able to write the metrics to Stackdriver.

I see "duplicate time series" or "out-of-order writes" errors in the logs.

These errors can be caused by writing metric data twice to the same time series. They can occur if your Prometheus endpoints expose the same metric data—the same set of metric label values—twice from a single Stackdriver monitored resource.

For example, a Kubernetes container might expose Prometheus metrics on multiple ports. Since the Stackdriver k8s_container monitored resource doesn't differentiate resources based on port, Stackdriver detects that you are writing two points to the same time series. A workaround is to add a metric label in Prometheus that differentiates the time series; for example, you might use the label __meta_kubernetes_pod_annotation_prometheus_io_port, because its value should remain constant across container restarts.
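
For example, a scrape configuration fragment along these lines attaches the port annotation as a label. This is a sketch only; the job name and the target label name port are illustrative:

scrape_configs:
  - job_name: example-kubernetes-pods   # illustrative job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Copy the pod's prometheus.io/port annotation into a label, so that
      # series scraped from different ports map to distinct time series.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
        target_label: port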

I see "metric kind must be X, but is Y" errors in the logs.

These errors can be caused by changing a Prometheus metric's type in the source code, for example from gauge to counter. Stackdriver metrics are strictly typed, and Monitoring enforces this because the data semantics vary with the type.

If you want to change a metric's type, you must delete the corresponding metric descriptor, which makes the metric's existing time series data inaccessible.
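
If you do need to delete a descriptor, the Monitoring API's metricDescriptors.delete method removes it. This is a sketch; my_metric is a placeholder for the Prometheus metric name, and GCP_PROJECT is your project ID:

# Deleting the descriptor makes the metric's existing time series data inaccessible.
curl -s -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/${GCP_PROJECT}/metricDescriptors/external.googleapis.com/prometheus/my_metric"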
