Get started with managed collection

This document describes how to set up Google Cloud Managed Service for Prometheus with managed collection. The setup is a minimal example of working ingestion, using a Prometheus deployment that monitors an example application and stores collected metrics in Monarch.

This document shows you how to do the following:

  • Set up your environment and command-line tools.
  • Set up managed collection for your cluster.
  • Configure a resource for target scraping and metric ingestion.
  • Migrate existing prometheus-operator custom resources.

We recommend that you use managed collection; it reduces the complexity of deploying, scaling, sharding, configuring, and maintaining the collectors. Managed collection is supported on both GKE and non-GKE Kubernetes environments. For more information about managed and self-deployed data collection, see Data collection with Managed Service for Prometheus.

Before you begin

This section describes the configuration needed for the tasks described in this document.

Set up projects and tools

To use Google Cloud Managed Service for Prometheus, you need the following resources:

  • A Google Cloud project with the Cloud Monitoring API enabled.

    • If you don't have a Cloud project, then do the following:

      1. In the Cloud Console, go to New Project:

        Create a New Project

      2. In the Project Name field, enter a name for your project and then click Create.

      3. Go to Billing:

        Go to Billing

      4. Select the project you just created if it isn't already selected at the top of the page.

      5. You are prompted to choose an existing payments profile or to create a new one.

      The Monitoring API is enabled by default for new projects.

    • If you already have a Cloud project, then ensure that the Monitoring API is enabled. You can check in the console as follows, or use the gcloud sketch shown after this list:

      1. Go to APIs & services:

        Go to APIs & services

      2. Select your project.

      3. Click Enable APIs and Services.

      4. Search for "Monitoring".

      5. In the search results, click through to "Stackdriver Monitoring API".

      6. If "API enabled" is not displayed, then click the Enable button.

  • A Kubernetes cluster. If you do not have a Kubernetes cluster, then follow the instructions in the Quickstart for GKE.
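
As an alternative to the console steps above, you can check and enable the Monitoring API from the command line. The following is a minimal sketch using the gcloud tool; monitoring.googleapis.com is the service name of the Monitoring API:

# List enabled services and look for the Monitoring API:
gcloud services list --enabled | grep monitoring

# Enable the Monitoring API if it is not listed:
gcloud services enable monitoring.googleapis.com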

You also need the following command-line tools:

  • gcloud
  • kubectl

The gcloud and kubectl tools are part of the Cloud SDK. For information about installing them, see Managing Cloud SDK components. To see the Cloud SDK components you have installed, run the following command:

gcloud components list
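
If kubectl is not in the list of installed components, one option is to add it as a Cloud SDK component:

gcloud components install kubectl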

Configure your environment

To avoid repeatedly entering your project ID or cluster name, perform the following configuration:

  • Configure the command-line tools as follows:

    • Configure the gcloud tool to refer to the ID of your Cloud project:

      gcloud config set project PROJECT_ID
      
    • Configure the kubectl tool to use your cluster:

      kubectl config set-cluster CLUSTER_NAME
      

    For more information about these tools, see the gcloud reference documentation and the kubectl reference documentation.
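
    If your cluster runs on GKE, you can also let gcloud write the kubectl configuration for you instead of setting it manually. This sketch assumes a zonal cluster, with CLUSTER_NAME and ZONE as placeholders:

      gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE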

Set up a namespace

Create the gmp-test Kubernetes namespace for resources you create as part of the example application:

kubectl create ns gmp-test
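
To confirm that the namespace was created, you can list it by name:

kubectl get namespace gmp-test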

Set up managed collection

To download and deploy managed collection to your cluster, apply the service's setup and operator manifests by running the following kubectl commands:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.1.1/examples/setup.yaml

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.1.1/examples/operator.yaml

After you apply the manifests, managed collection is running, but no metrics are ingested yet.
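
Before continuing, you can verify that the collection components are up. This sketch assumes the manifests deploy the operator and collectors into the gmp-system namespace:

kubectl get pods -n gmp-system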

Deploy the example application

The managed service provides a manifest for an example application that emits Prometheus metrics on its metrics port. The application runs three replicas.

To deploy the example application, run the following command:

kubectl -n gmp-test apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.1.1/examples/example-app.yaml
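
To verify that the example application is up, you can wait for all three replicas to report ready. The app=prom-example label is the same label that the PodMonitoring resource in the next section selects on:

kubectl -n gmp-test get pods -l app=prom-example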

Configure a PodMonitoring resource

To ingest the metric data emitted by the example application, you use target scraping. Target scraping and metrics ingestion are configured using Kubernetes custom resources. The managed service uses PodMonitoring custom resources (CRs).

A PodMonitoring CR scrapes targets only in the namespace in which the CR is deployed. To scrape targets in multiple namespaces, deploy the same PodMonitoring CR in each namespace. You can verify that the PodMonitoring resource is installed in the intended namespace by running kubectl get podmonitoring -A.

For reference documentation about all the Managed Service for Prometheus CRs, see the prometheus-engine/doc/api reference.

The following manifest defines a PodMonitoring resource, prom-example, in the gmp-test namespace. The resource uses a Kubernetes label selector to find all pods in the namespace that have the label app with the value prom-example. The matching pods are scraped on a port named metrics, every 30 seconds, on the /metrics HTTP path.

apiVersion: monitoring.googleapis.com/v1alpha1
kind: PodMonitoring
metadata:
  name: prom-example
spec:
  selector:
    matchLabels:
      app: prom-example
  endpoints:
  - port: metrics
    interval: 30s

To apply this resource, run the following command:

kubectl -n gmp-test apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.1.1/examples/pod-monitoring.yaml
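
Applying the manifest from the URL is equivalent to applying the resource shown above. If you prefer to keep the manifest under your own control, the following sketch pipes it to kubectl from standard input:

kubectl -n gmp-test apply -f - <<EOF
apiVersion: monitoring.googleapis.com/v1alpha1
kind: PodMonitoring
metadata:
  name: prom-example
spec:
  selector:
    matchLabels:
      app: prom-example
  endpoints:
  - port: metrics
    interval: 30s
EOF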

Your managed collector is now scraping the matching pods.

If you are running on GKE, then no further configuration is required; the collectors retrieve credentials from the environment automatically.

If you are running outside of GKE, then you need to create a service account and authorize it to write your metric data, as described in the following section.

Provide credentials explicitly

When running on GKE, the collecting Prometheus server automatically retrieves credentials from the environment based on the Compute Engine default service account or the Workload Identity setup.

In non-GKE Kubernetes clusters, credentials must be explicitly provided through the OperatorConfig resource in the gmp-public namespace.

  1. Create a service account:

    gcloud iam service-accounts create gmp-test-sa
    

  2. Grant the required permissions to the service account:

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
      --role=roles/monitoring.metricWriter
    

  3. Create and download a key for the service account:

    gcloud iam service-accounts keys create gmp-test-sa-key.json \
      --iam-account=gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com
    
  4. Add the key file as a secret to your non-GKE cluster:

    kubectl -n gmp-public create secret generic gmp-test-sa \
      --from-file=key.json=gmp-test-sa-key.json
    

  5. Open the OperatorConfig resource for editing:

    kubectl -n gmp-public edit operatorconfig config
    

  6. Add the collection.credentials section shown in the following example to the resource:

    apiVersion: monitoring.googleapis.com/v1alpha1
    kind: OperatorConfig
    metadata:
      namespace: gmp-public
      name: config
    collection:
      credentials:
        name: gmp-test-sa
        key: key.json
    

  7. Save the file and close the editor. After the change is applied, the pods are re-created and start authenticating to the metric backend with the given service account.
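
To confirm the change, you can check that the secret exists and watch the collector pods restart. This sketch again assumes the collectors run in the gmp-system namespace:

kubectl -n gmp-public get secret gmp-test-sa
kubectl -n gmp-system get pods --watch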

Additional topics for managed collection

This section describes how to do the following:

  • Filter the data you export to the managed service.
  • Convert your existing prom-operator resources for use with the managed service.

Filter exported metrics

If you collect a lot of data, you might want to prevent some time series from being sent to Managed Service for Prometheus to keep costs down.

To filter exported metrics, you can configure a set of PromQL series selectors in the OperatorConfig resource. A time series is exported to Managed Service for Prometheus if it satisfies at least one of the selectors.

  1. Open the OperatorConfig resource for editing:

    kubectl -n gmp-public edit operatorconfig config
    
  2. Add the following collection filter to the resource:

    apiVersion: monitoring.googleapis.com/v1alpha1
    kind: OperatorConfig
    metadata:
      namespace: gmp-public
      name: config
    collection:
      filter:
        matchOneOf:
        - '{job="prometheus"}'
        - '{__name__=~"job:.+"}'
    
  3. Save the file and close the editor.

With this filter, only metrics for the "prometheus" job are exported, along with metrics produced by recording rules that aggregate to the job level, assuming that the rules follow naming best practices. Samples for all other time series are filtered out. By default, no selectors are specified and all time series are exported.

The filter.matchOneOf configuration section has the same semantics as the match[] parameters for Prometheus federation.
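
As another illustration, you could filter by metric name instead of by job. The example_ prefix below is hypothetical; substitute a prefix your applications actually emit:

collection:
  filter:
    matchOneOf:
    - '{__name__=~"example_.+"}'  # keep only metrics named example_*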

Convert existing prometheus-operator resources

In most cases, you can convert your existing prometheus-operator resources to Managed Service for Prometheus managed-collector configurations.

For example, the ServiceMonitor resource defines monitoring for a set of services. The PodMonitoring resource supports a subset of the fields supported by the ServiceMonitor resource. You can convert a ServiceMonitor CR to a PodMonitoring CR by mapping the fields as described in the following list:

  • .ServiceMonitorSpec.Selector (monitoring.coreos.com/v1) maps to .PodMonitoringSpec.Selector (monitoring.googleapis.com/v1alpha1); the fields are identical.
  • .ServiceMonitorSpec.Endpoints[] maps to .PodMonitoringSpec.Endpoints[]; .TargetPort maps to .Port, and .Path, .Interval, and .Timeout are compatible.
  • .ServiceMonitorSpec.TargetLabels maps to .PodMonitoringSpec.TargetLabels; the PodMonitoring resource must specify the pod label in .FromPod[].From and the target label in .FromPod[].To.

The following is a sample ServiceMonitor CR. The targetPort and targetLabels fields change in the conversion; the remaining fields map directly:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - targetPort: web
    path: /stats
    interval: 30s
  targetLabels:
  - foo

The following is the analogous PodMonitoring CR, assuming that your service and its pods are labeled with app=example-app. If this assumption does not apply, then you need to use the label selectors of the underlying Service resource.

The port and targetLabels fields have been converted; the remaining fields carry over unchanged:

apiVersion: monitoring.googleapis.com/v1alpha1
kind: PodMonitoring
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
    path: /stats
    interval: 30s
  targetLabels:
    fromPod:
    - from: foo # pod label from example-app Service pods.
      to: foo

Teardown

To disable managed collection, run the following command:

kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.1.1/examples/operator.yaml
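
The command above removes the operator but leaves the custom resource definitions and the example resources in place. To remove those as well, the following is a sketch, assuming the CRDs were installed by setup.yaml; note that deleting the PodMonitoring CRD also deletes every PodMonitoring resource in the cluster:

kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.1.1/examples/setup.yaml
kubectl delete namespace gmp-test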

What's next