Configure a query user interface


After you have deployed Google Cloud Managed Service for Prometheus, you can query the data sent to the managed service and display the results in charts and dashboards.

This document describes metrics scopes, which determine the data you can query, and the following ways to retrieve and use the data you've collected:

  • Prometheus-based interfaces:

    • Managed Service for Prometheus page in the Google Cloud console
    • Prometheus HTTP API
    • Prometheus UI
    • Grafana
  • Cloud Monitoring in the Google Cloud console

All query interfaces for Managed Service for Prometheus are configured to retrieve data from Monarch using the Cloud Monitoring API. By querying Monarch instead of querying data from local Prometheus servers, you get global monitoring at scale.

Before you begin

If you have not already deployed the managed service, then set up managed collection or self-deployed collection. You can skip this if you're only interested in querying Cloud Monitoring metrics using PromQL.

Configure your environment

To avoid repeatedly entering your project ID or cluster name, perform the following configuration:

  • Configure the command-line tools as follows:

    • Configure the gcloud CLI to refer to the ID of your Cloud project:

      gcloud config set project PROJECT_ID
      
    • Configure the kubectl CLI to use your cluster:

      kubectl config set-cluster CLUSTER_NAME
      

    For more information about these tools, see the gcloud CLI and kubectl reference documentation.

Set up a namespace

Create the NAMESPACE_NAME Kubernetes namespace for resources you create as part of the example application:

kubectl create ns NAMESPACE_NAME
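To confirm that the namespace was created, you can list it by name (a quick check; NAMESPACE_NAME is the same placeholder used throughout this document):

```shell
# Prints the namespace's name, status, and age if it exists.
kubectl get namespace NAMESPACE_NAME
```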

Verify service account credentials

You can skip this section if your Kubernetes cluster has Workload Identity enabled.

When running on GKE, Managed Service for Prometheus automatically retrieves credentials from the environment based on the Compute Engine default service account. The default service account has the necessary permissions, monitoring.metricWriter and monitoring.viewer, by default. If you do not use Workload Identity, and you have previously removed either of those roles from the default node service account, you will have to re-add those missing permissions before continuing.
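If you want to check which roles the Compute Engine default service account currently holds, one way is to filter the project's IAM policy, as in the following sketch. The default service account is named after your project's numeric ID, which you can look up with gcloud projects describe:

```shell
# Look up the project number, then list the roles bound to the
# Compute Engine default service account
# (PROJECT_NUMBER-compute@developer.gserviceaccount.com).
PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --format="table(bindings.role)"
```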

If you are not running on GKE, see Provide credentials explicitly.

Configure a service account for Workload Identity

You can skip this section if your Kubernetes cluster does not have Workload Identity enabled.

Managed Service for Prometheus captures metric data by using the Cloud Monitoring API. If your cluster is using Workload Identity, you must grant your Kubernetes service account permission to use the Monitoring API. This section describes how to create and bind a Google Cloud service account and how to authorize that service account to read metric data.

Create and bind the service account

This step appears in several places in the Managed Service for Prometheus documentation. If you have already performed this step as part of a prior task, then you do not need to repeat it. Skip ahead to Authorize the service account.

The following command sequence creates the gmp-test-sa service account and binds it to the default Kubernetes service account in the NAMESPACE_NAME namespace:

gcloud config set project PROJECT_ID \
&&
gcloud iam service-accounts create gmp-test-sa \
&&
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE_NAME/default]" \
  gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
&&
kubectl annotate serviceaccount \
  --namespace NAMESPACE_NAME \
  default \
  iam.gke.io/gcp-service-account=gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com

If you are using a different GKE namespace or service account, adjust the commands appropriately.
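To confirm the binding, you can check that the annotation landed on the Kubernetes service account (a quick verification sketch; the command prints the bound Google Cloud service account's email address):

```shell
# Print the GCP service account annotation on the default
# Kubernetes service account; dots in the annotation key are
# escaped in the JSONPath expression.
kubectl get serviceaccount default \
  --namespace NAMESPACE_NAME \
  -o jsonpath='{.metadata.annotations.iam\.gke\.io/gcp-service-account}'
```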

Authorize the service account

Groups of related permissions are collected into roles, and you grant the roles to a principal, in this example, the Google Cloud service account. For more information about Monitoring roles, see Access control.

The following command grants the Google Cloud service account, gmp-test-sa, the Monitoring API roles it needs to read metric data.

If you have already granted the Google Cloud service account a specific role as part of a prior task, then you don't need to do it again.

To authorize your service account to read from a multi-project metrics scope, follow these instructions and then see Change the queried project.

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/monitoring.viewer

Workload Identity in production environments

The example described in this document binds the Google Cloud service account to the default Kubernetes service account and gives the Google Cloud service account all the permissions it needs to use the Monitoring API.

In a production environment, you might want to use a finer-grained approach, with a service account for each component, each with minimal permissions. For more information on configuring service accounts for workload-identity management, see Using Workload Identity.

Queries and metrics scopes

The data you can query is determined by the Cloud Monitoring construct metrics scope, regardless of the method you use to query the data. For example, if you use Grafana to query Managed Service for Prometheus data, then each metrics scope must be configured as a separate data source.

A Monitoring metrics scope is a read-time-only construct that lets you query metric data belonging to multiple Google Cloud projects through a designated Google Cloud project, called the scoping project, which hosts the metrics scope.

By default, a project is the scoping project for its own metrics scope, and the metrics scope contains the metrics and configuration for that project. A scoping project can have more than one monitored project in its metrics scope, and the metrics and configurations from all the monitored projects in the metrics scope are visible to the scoping project. A monitored project can also belong to more than one metrics scope.

When you query metrics in a scoping project that hosts a multi-project metrics scope, you can retrieve data from multiple projects. If your metrics scope contains all your projects, then your queries and rules evaluate globally.

For more information about scoping projects and metrics scopes, see Metrics scopes. For information about configuring a multi-project metrics scope, see View metrics for multiple projects.

Managed Service for Prometheus page

The simplest way to verify that your Prometheus data is being exported is to use the PromQL-based Managed Service for Prometheus page in the Google Cloud console.

To view this page, do the following:

  1. In the Google Cloud console, go to Monitoring or use the following button:

    Go to Monitoring

  2. In the Monitoring navigation pane, click Managed Prometheus.

On the Managed Service for Prometheus page, you can use PromQL queries to retrieve and chart data collected with the managed service. This page can query only data collected by Managed Service for Prometheus.

The following screenshot shows a chart that displays the up metric:

Managed Service for Prometheus chart for the Prometheus "up" metric.

Prometheus UI

You can use the Prometheus UI to access and visualize ingested data. This UI runs PromQL queries against all data in your Google Cloud project, as determined by the metrics scope associated with your project.

The UI also acts as an authentication proxy for accessing ingested data. This feature can be used for client tools that don't support OAuth2, including Grafana. If you plan to use Grafana to visualize data from Managed Service for Prometheus, then you must deploy the Prometheus UI as well.

Deploy the UI

To deploy the Prometheus UI for Managed Service for Prometheus, run the following commands:

  1. Deploy the frontend service and configure it to query the scoping project of the metrics scope of your choice:

    curl https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.4.3-gke.0/examples/frontend.yaml |
    sed 's/\$PROJECT_ID/PROJECT_ID/' |
    kubectl apply -n NAMESPACE_NAME -f -
    
  2. Port-forward the frontend service to your local machine. The following example forwards the service to port 9090:

    kubectl -n NAMESPACE_NAME port-forward svc/frontend 9090
    

    This command does not return, and while it is running, it reports accesses to the URL.

If you want to continue using a Grafana deployment installed by kube-prometheus, then deploy the Prometheus UI in the monitoring namespace instead.

You can access the Prometheus UI in your browser at the URL http://localhost:9090. If you are using Cloud Shell for this step, you can get access by using the Web Preview button.
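While the port-forward is running, you can also query the frontend from the command line, because it serves the Prometheus HTTP API. For example, the following sketch issues an instant query for the up metric against the forwarded port:

```shell
# Query the port-forwarded frontend; returns a Prometheus-format
# JSON response containing the current "up" time series.
curl 'http://localhost:9090/api/v1/query?query=up'
```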

The following screenshot shows a table in the Prometheus UI that displays the up metric:

Viewing a metric in the Prometheus UI

You can also set up suitable authentication and authorization on the frontend service by using, for example, Identity Aware Proxy. For more information on exposing services, see Exposing applications using services.

Provide credentials explicitly

When running on GKE, the frontend automatically retrieves credentials from the environment based on the Compute Engine default service account or the Workload Identity setup.

In non-GKE Kubernetes clusters, credentials must be explicitly provided to the frontend by using flags or the GOOGLE_APPLICATION_CREDENTIALS environment variable.

  1. Set the context to your target project:

    gcloud config set project PROJECT_ID
    
  2. Create a service account:

    gcloud iam service-accounts create gmp-test-sa
    

    This step creates the service account that you might have already created in the Workload Identity instructions.

  3. Grant the required permissions to the service account:

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
      --role=roles/monitoring.viewer
    

  4. Create and download a key for the service account:

    gcloud iam service-accounts keys create gmp-test-sa-key.json \
      --iam-account=gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com
    
  5. Add the key file as a secret to your non-GKE cluster:

    kubectl -n NAMESPACE_NAME create secret generic gmp-test-sa \
      --from-file=key.json=gmp-test-sa-key.json
    

  6. Open the frontend Deployment resource for editing:

    kubectl -n NAMESPACE_NAME edit deploy frontend
    
  7. Add the credentials-file argument and the volume mount and volume entries shown in the following listing to the resource:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: NAMESPACE_NAME
      name: frontend
    spec:
      template:
        spec:
          containers:
          - name: frontend
            args:
            - --query.credentials-file=/gmp/key.json
    ...
            volumeMounts:
            - name: gmp-sa
              mountPath: /gmp
              readOnly: true
    ...
          volumes:
          - name: gmp-sa
            secret:
              secretName: gmp-test-sa
    ...
    

  8. Save the file and close the editor. After the change is applied, the pods are re-created and start authenticating to the metric backend with the given service account.

Alternatively, instead of using the flags set in this example, you can set the key-file path by using the GOOGLE_APPLICATION_CREDENTIALS environment variable.
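For example, assuming the secret is already mounted at /gmp as shown in the preceding listing, you could set the environment variable on the Deployment rather than passing the flag (a sketch using kubectl set env):

```shell
# Point Application Default Credentials at the mounted key file
# instead of using the --query.credentials-file flag.
kubectl -n NAMESPACE_NAME set env deployment/frontend \
  GOOGLE_APPLICATION_CREDENTIALS=/gmp/key.json
```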

Change the queried project

The frontend deployment uses the cluster's Google Cloud project as the scoping project. If this project is the scoping project of a multi-project metrics scope, then it can read metrics from all projects in the metrics scope.

You can specify a different project by using the --query.project-id flag.

Typically, you use a dedicated project as a scoping project, and this project is not the same project the frontend deployment runs in. To let the deployment read a different target project, you must do the following:

  • Tell the frontend deployment which project is the target project.
  • Grant the service account permission to read from the target project.

To grant the permissions needed to access a different Cloud project, do the following:

  1. Grant the service account permission to read from the target project you want to query:

    gcloud projects add-iam-policy-binding SCOPING_PROJECT_ID \
      --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
      --role=roles/monitoring.viewer
    
  2. Open the frontend deployment created previously for editing:

    kubectl -n NAMESPACE_NAME edit deploy frontend
    
  3. Specify the target project by using the --query.project-id flag:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: NAMESPACE_NAME
      name: frontend
    spec:
      template:
        spec:
          containers:
          - name: frontend
            args:
            - --query.project-id=SCOPING_PROJECT_ID
    ...
    

    Save the file and close the editor. After the change is applied, the frontend pods are re-created and query the new scoping project.

Grafana

Managed Service for Prometheus uses the built-in Prometheus data source for Grafana, meaning that you can keep using any community-created or personal Grafana dashboards without any changes.

Authenticating to Google Cloud APIs

Google Cloud APIs all require authentication using OAuth2; however, Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must use the Prometheus UI as an authentication proxy.

If you have not already deployed the Prometheus UI frontend service, deploy it now by running the following command:

curl https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.4.3-gke.0/examples/frontend.yaml |
sed 's/\$PROJECT_ID/PROJECT_ID/' |
kubectl apply -n NAMESPACE_NAME -f -

See Change the queried project for instructions on configuring the metrics scope used by this service to query across multiple projects.

If you have a pre-existing Grafana deployment, such as one installed by the kube-prometheus library or one installed using a helm chart, you can continue using it with Managed Service for Prometheus. If so, see Configure a data source for next steps. Otherwise, you must first deploy Grafana.

Deploy Grafana

If you don't have a running Grafana deployment in your cluster, then you can create an ephemeral test deployment to experiment with.

To create an ephemeral Grafana deployment, apply the Managed Service for Prometheus grafana.yaml manifest to your cluster, and port-forward the grafana service to your local machine. The following example forwards the service to port 3000.

  1. Apply the grafana.yaml manifest:

    kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.4.3-gke.0/examples/grafana.yaml
    
  2. Port-forward the grafana service to your local machine. This example forwards the service to port 3000:

    kubectl -n NAMESPACE_NAME port-forward svc/grafana 3000
    

    This command does not return, and while it is running, it reports accesses to the URL.

    You can access Grafana in your browser at the URL http://localhost:3000 with the username:password admin:admin. If you are using Cloud Shell for this step, you can get access by using the Web Preview button.
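While the port-forward is running, you can also confirm that Grafana is up from the command line by calling its health endpoint (a quick check; /api/health is part of Grafana's HTTP API and requires no authentication):

```shell
# Returns a small JSON document including "database": "ok"
# when Grafana is healthy.
curl http://localhost:3000/api/health
```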

Configure a data source

To query Managed Service for Prometheus in Grafana by using the Prometheus UI as the authentication proxy, you must add a new data source to Grafana. To add a data source for the managed service, do the following:

  1. Go to your Grafana deployment, for example, by browsing to the URL http://localhost:3000 to reach the Grafana welcome page.

  2. Select Configuration from the main Grafana menu, then select Data Sources.

    Adding a data source in Grafana.

  3. Select Add data source, and select Prometheus as the time series database.

    Adding a Prometheus data source.

  4. In the URL field of the HTTP pane, enter the URL of the Managed Service for Prometheus frontend service. If you configured the Prometheus UI to run on port 9090, then the service URL for this field is http://frontend.NAMESPACE_NAME.svc:9090.

    In the Timeout field of the HTTP pane, set the value to 120.

    In the Query timeout field, set the value to 2m.

    In the HTTP Method field, select GET.

    If you have multiple Prometheus data sources, then you might give this one a name like "Managed Prometheus Service". You can leave other fields with their default values.

    Configuring a Managed Service for Prometheus data source.

  5. Click Save & Test, and look for the message "Data source is working".

    Testing your Managed Service for Prometheus data source.
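As an alternative to clicking through the UI, the same data source can be created with Grafana's HTTP API. The following sketch assumes the ephemeral deployment's default admin:admin credentials and the frontend service URL described in step 4:

```shell
# Create a Prometheus data source pointing at the
# Managed Service for Prometheus frontend service.
curl -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST http://localhost:3000/api/datasources \
  -d '{
        "name": "Managed Prometheus Service",
        "type": "prometheus",
        "url": "http://frontend.NAMESPACE_NAME.svc:9090",
        "access": "proxy"
      }'
```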

Use the new data source

You can now create Grafana dashboards using the new data source. You can also redirect existing dashboards to the new data source. The following screenshot shows a Grafana chart that displays the up metric:

Grafana chart for the Managed Service for Prometheus "up" metric.

Managed Service for Prometheus data in Cloud Monitoring

Managed Service for Prometheus shares the data-storage backend, Monarch, with Cloud Monitoring. You can use all of the tools provided by Cloud Monitoring with the data collected by Managed Service for Prometheus. For example, you can use Metrics Explorer, as described in Google Cloud console for Monitoring. You can also set alerts based on these metrics.

When working with Managed Service for Prometheus data in Cloud Monitoring, you use the query tools provided by Cloud Monitoring, such as Metrics Explorer.

The Cloud Monitoring UI does not support PromQL, except on the Managed Service for Prometheus page.

The prometheus_target resource

In Cloud Monitoring, time-series data is written against a monitored-resource type. For Prometheus metrics, the monitored-resource type is prometheus_target. Monitoring queries for Managed Service for Prometheus data must specify this resource type.

The prometheus_target resource has the following labels, which you can use for filtering and manipulating queried data:

  • project_id: The identifier of the Google Cloud project associated with this resource.
  • location: The physical location (Google Cloud region) where the data is stored. This value is typically the region of your GKE cluster or Compute Engine instance. If data is collected from an AWS or on-premises deployment, then the value might be the closest Google Cloud region.
  • cluster: The GKE cluster or related concept; might be empty.
  • namespace: The GKE namespace or related concept; might be empty.
  • job: The job label of the Prometheus target, if known; might be empty for rule-evaluation results.
  • instance: The instance label of the Prometheus target, if known; might be empty for rule-evaluation results.

The values for these labels are set during collection.
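These resource labels can be used directly as PromQL label matchers to narrow a query. For example, the following sketch reuses the HTTP API endpoint described later in this document to restrict the up query to a single cluster and namespace; CLUSTER_NAME and NAMESPACE_NAME are placeholders for your own values:

```shell
# Filter the "up" query by prometheus_target resource labels,
# used here as ordinary PromQL label matchers.
curl https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus/api/v1/query \
  --data-urlencode 'query=up{cluster="CLUSTER_NAME",namespace="NAMESPACE_NAME"}' \
  -H "Authorization: Bearer $(gcloud auth print-access-token)"
```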

Google Cloud console for Cloud Monitoring

To view your Managed Service for Prometheus data as Cloud Monitoring time series, you can use Metrics Explorer. To configure Metrics Explorer to display metrics, do the following:

  1. In the Google Cloud console, go to Monitoring or use the following button:

    Go to Monitoring

  2. In the Monitoring navigation pane, click Metrics Explorer.

  3. Specify the data to appear on the chart. You can use the MQL tab or the Configuration tab.

    • To use the MQL tab, do the following:

      1. Select the MQL tab.

      2. Enter the following query:

        fetch prometheus_target::prometheus.googleapis.com/up/gauge
        
      3. Click Run Query.

    • To use the Configuration tab, do the following:

      1. Select the Configuration tab.

      2. In the Resource type field, type "prometheus" to filter the list, then select Prometheus Target.

      3. In the Metric field, type "up/" to filter the list, then select prometheus/up/gauge.

The following screenshot shows the Metrics Explorer chart from the MQL tab that displays the up metric:

Metrics Explorer chart for the Managed Service for Prometheus "up" metric.

Prometheus HTTP API

Managed Service for Prometheus supports the upstream Prometheus HTTP API at the URL prefixed by https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus/api/v1/. For information about the supported endpoints, see API compatibility.

This API can be accessed by any tool that can interact with a standard Prometheus server. This is an API endpoint only; it doesn't serve a UI. As a Google Cloud API, the API uses OAuth2 authentication, and as part of the Cloud Monitoring API, the value of the PROJECT_ID is the scoping project of a metrics scope, so you can retrieve data from any project in the metrics scope. For more information about scoping, see Metrics scopes.

To use this endpoint, provide a PromQL expression. For example, the following instant query retrieves all time series that have the metric name up:

curl https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus/api/v1/query \
  -d "query=up" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)"

If the request is successful, then the query returns a result like the following, which has been formatted for readability:

{
  "status":"success",
  "data":{
    "resultType":"vector",
    "result":[{
      "metric": {
        "__name__":"up",
        "cluster":"gmp-test",
        "instance":"prom-example-84c6f547f5-g4ljn:web",
        "job":"prometheus",
        "location":"us-central1-a",
        "project_id":"a-gcp-project"
      },
      "value": [1634873239.971,"1"]
    }]
  }
}

For information about querying Google Cloud system metrics using PromQL, see PromQL for Cloud Monitoring metrics.
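Range queries work the same way through the query_range endpoint. The following sketch retrieves an hour of up samples at a one-minute step; the start and end timestamps are illustrative:

```shell
# Instant queries use /query; range queries add start, end,
# and step parameters to /query_range.
curl https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus/api/v1/query_range \
  -d "query=up" \
  -d "start=2024-01-01T00:00:00Z" \
  -d "end=2024-01-01T01:00:00Z" \
  -d "step=60s" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)"
```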

API compatibility

The following Prometheus HTTP API endpoints are supported by Managed Service for Prometheus under the URL prefixed by https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus/api/v1/.

  • The following endpoints are fully supported:

    • /api/v1/query
    • /api/v1/query_range
    • /api/v1/metadata
    • /api/v1/labels

    For information about PromQL compatibility, see PromQL support.

  • The /api/v1/$label/values endpoint supports only the __name__ label. This limitation causes label_values($label) variable queries in Grafana to fail. Instead, you can use label_values($metric, $label). This type of query is recommended because it avoids fetching values for labels on metrics that are not relevant to the given dashboard.

  • The /api/v1/series endpoint is supported for GET but not POST requests. When you use the frontend proxy, the proxy manages this restriction for you. You can also configure your Prometheus datasources in Grafana to issue only GET requests.
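Because the /api/v1/series endpoint accepts only GET requests, use curl's -G flag to send the match[] parameter as a query string rather than a POST body (a sketch):

```shell
# -G forces a GET request; --data-urlencode appends match[]
# to the URL's query string instead of the request body.
curl -G https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus/api/v1/series \
  --data-urlencode 'match[]=up' \
  -H "Authorization: Bearer $(gcloud auth print-access-token)"
```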