This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from the Kibana exporter. This document shows you how to do the following:
- Set up the Kibana exporter to report metrics.
- Configure a PodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.
- Access a dashboard in Cloud Monitoring to view the metrics.
- Configure alerting rules to monitor the metrics.
These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the Kibana documentation for installation information.
These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.
For information about Kibana, see Kibana.
Prerequisites
To collect metrics from the Kibana exporter by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:
- Your cluster must be running Google Kubernetes Engine version 1.21.4-gke.300 or later.
- You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.
- To use the dashboards available in Cloud Monitoring for the Kibana integration, you must use kibana-prometheus-exporter version 8.0.0 or later. For more information about available dashboards, see View dashboards.
The installation process requires the use of bin/kibana-plugin install PLUGIN. One way to install the plugin is to define a custom Kibana Docker image; see the following example:
FROM kibana:KIBANA_VERSION
RUN bin/kibana-plugin install https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/PLUGIN_VERSION/kibanaPrometheusExporter-PLUGIN_VERSION.zip
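For example, you might build the custom image and push it to a registry with commands like the following; the image name and registry path shown here are placeholders rather than values defined in this guide:

docker build -t REGISTRY/PROJECT_ID/kibana-with-exporter:KIBANA_VERSION .
docker push REGISTRY/PROJECT_ID/kibana-with-exporter:KIBANA_VERSION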
After you build the image and push it to a remote repository, you can reference it in your Kubernetes Deployment. For example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0.0.0.0"
    # Update this with credentials to match your Elasticsearch instance
    elasticsearch.hosts: http://username:password@elasticsearch-service-name:9200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: kibana
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: kibana
        image: CUSTOM_IMAGE
        ports:
        - containerPort: 5601
          name: kibana
          protocol: TCP
        volumeMounts:
        - mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
          name: kibana
      volumes:
      - name: kibana
        configMap:
          name: kibana
          items:
          - key: kibana.yml
            path: kibana.yml
To verify that the Kibana exporter is emitting metrics on the expected endpoints, do the following:
- Set up port-forwarding with the following command:
  kubectl -n NAMESPACE_NAME port-forward POD_NAME 5601
- Access the endpoint localhost:5601/_prometheus/metrics by using the browser or the curl utility in another terminal session.
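For example, with the port-forward in place, you can run the following command in another terminal session to confirm that the endpoint returns metrics in the Prometheus exposition format:

curl http://localhost:5601/_prometheus/metrics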
Define a PodMonitoring resource
For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to the Kibana exporter in the same namespace.
You can use the following PodMonitoring configuration:
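The following manifest is a minimal sketch rather than the exact manifest shipped with the integration. It assumes that your Kibana Pods carry the label app.kubernetes.io/name: kibana, as in the Deployment example earlier, and that they serve metrics on port 5601 at /_prometheus/metrics; adjust the selector, port, path, and scrape interval to match your deployment:

apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: kibana
spec:
  # Select the Kibana Pods created by the Deployment above.
  selector:
    matchLabels:
      app.kubernetes.io/name: kibana
  endpoints:
  # Scrape the endpoint exposed by the kibana-prometheus-exporter plugin.
  - port: 5601
    scheme: http
    path: /_prometheus/metrics
    interval: 30s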
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
Define rules and alerts
You can use the following Rules configuration to define alerts on your Kibana metrics:
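The following manifest is an illustrative sketch rather than a definitive rule set. The alert name, duration, and severity are example values, and the expression assumes that the targets are scraped by a PodMonitoring resource named kibana, so that they carry the label job="kibana":

apiVersion: monitoring.googleapis.com/v1
kind: Rules
metadata:
  name: kibana-rules
  labels:
    app.kubernetes.io/name: kibana
spec:
  groups:
  - name: kibana
    interval: 30s
    rules:
    # Fire when the Kibana metrics endpoint has been unreachable for 5 minutes.
    - alert: KibanaDown
      expr: up{job="kibana"} == 0
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: Kibana exporter target is unreachable
        description: No scrape of the Kibana metrics endpoint has succeeded for 5 minutes.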
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
For more information about applying rules to your cluster, see Managed rule evaluation and alerting.
You can adjust the alert thresholds to suit your application.
Verify the configuration
You can use Metrics Explorer to verify that you correctly configured the Kibana exporter. It might take one or two minutes for Cloud Monitoring to ingest your metrics.
To verify the metrics are ingested, do the following:
- In the Google Cloud console, go to the Metrics explorer page:
If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
- Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
- Enter and run the following query:
up{job="kibana", cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}
View dashboards
The Cloud Monitoring integration includes the Kibana Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.
To view an installed dashboard, do the following:
- In the Google Cloud console, go to the Dashboards page:
If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Select the Dashboard List tab.
- Choose the Integrations category.
- Click the name of the dashboard, for example, Kibana Prometheus Overview.
To view a static preview of the dashboard, do the following:
- In the Google Cloud console, go to the Integrations page:
If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Click the Kubernetes Engine deployment-platform filter.
- Locate the Kibana integration and click View Details.
- Select the Dashboards tab.
Troubleshooting
For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.