This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from Istio. This document shows you how to do the following:
- Set up Istio to report metrics.
- Configure a PodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.
- Access a dashboard in Cloud Monitoring to view the metrics.
- Configure alerting rules to monitor the metrics.
These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the source repository for Istio for installation information.
These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.
For information about Istio, see Istio.
Prerequisites
To collect metrics from Istio by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:
- Your cluster must be running Google Kubernetes Engine version 1.21.4-gke.300 or later.
- You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.
Istio exposes Prometheus-format metrics automatically; you do not have to install a separate exporter. You can run the following checks to verify that the Istio Proxy has been injected as a sidecar and that both Istiod, the control plane of Istio, and the Istio Proxy are emitting metrics on the expected endpoints.
To determine if Istio Proxy is injected as a sidecar, run the following command, which enumerates the containers running in the application's pods:
kubectl get pod -l app=APPLICATION_NAME -n NAMESPACE_NAME -o jsonpath='{.items[0].spec.containers[*].name}'
If you see that the pods contain the istio-proxy sidecar container, then the exporter has been injected. If the sidecar is not injected, then follow the instructions at Istio: Installing the sidecar.
To verify that metrics are being emitted by the Istio Proxy, run the following command, which inspects the /stats/prometheus endpoint of the istio-proxy container on the specified pod:
kubectl exec POD_NAME -n NAMESPACE_NAME -c istio-proxy -- curl -sS 'localhost:15090/stats/prometheus'
If you see raw istio_* and envoy_* Prometheus metrics, then metrics are being emitted correctly.
To verify that metrics are being emitted similarly by Istiod, run the following command, which inspects the /metrics endpoint of Istiod on one of the pods in the istiod deployment:
kubectl exec -n istio-system deployment/istiod -- curl -sS 'localhost:15014/metrics'
Define a PodMonitoring resource
For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to the Istio exporter in the same namespace.
You can use the following PodMonitoring configuration:
Istio requires two separate PodMonitoring resources: one that monitors Istiod, and another that monitors the Istio Proxy sidecars and the ingress and egress gateways. To monitor Istio Proxy metrics across all namespaces in the cluster at once, apply the istio-proxy PodMonitoring resource to every namespace, or set up a ClusterPodMonitoring resource instead of a PodMonitoring resource per namespace.
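As a concrete illustration, a pair of PodMonitoring resources might look like the following sketch. The resource names, label selectors, scrape interval, and port values here are assumptions; adjust them to match how Istio is labeled and exposed in your cluster:

```yaml
# Hypothetical example: scrape Istiod's metrics endpoint in istio-system.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: istiod
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istiod
  endpoints:
  - port: http-monitoring   # Istiod serves /metrics on port 15014
    interval: 30s
---
# Hypothetical example: scrape the istio-proxy sidecar's Prometheus endpoint.
# Apply one copy per namespace, or use ClusterPodMonitoring instead.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: istio-proxy
  namespace: NAMESPACE_NAME
spec:
  selector:
    matchExpressions:
    - key: security.istio.io/tlsMode   # present on sidecar-injected pods
      operator: Exists
  endpoints:
  - port: 15090
    path: /stats/prometheus
    interval: 30s
```

The istio-proxy selector above keys off a label that Istio injection typically adds to workload pods; if your mesh labels pods differently, select on whatever labels reliably identify injected workloads and gateways.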
If you plan to use the Istio-provided Grafana dashboards, then in addition to applying the PodMonitoring resources described in this document, ensure that you also configure cAdvisor and Kubelet scraping.
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
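One way to manage these resources with Terraform is the kubernetes provider's kubernetes_manifest resource, which can apply arbitrary custom resources such as PodMonitoring. The following is an illustrative sketch only; the resource names, labels, and port are assumptions to adapt to your deployment:

```hcl
# Hypothetical sketch: manage the Istiod PodMonitoring resource with
# the Terraform kubernetes provider's kubernetes_manifest resource.
resource "kubernetes_manifest" "istiod_podmonitoring" {
  manifest = {
    apiVersion = "monitoring.googleapis.com/v1"
    kind       = "PodMonitoring"
    metadata = {
      name      = "istiod"
      namespace = "istio-system"
    }
    spec = {
      selector = {
        matchLabels = { app = "istiod" }
      }
      endpoints = [{
        port     = "http-monitoring"
        interval = "30s"
      }]
    }
  }
}
```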
Define rules and alerts
You can use the following Rules configuration to define alerts on your Istio metrics:
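As a minimal sketch, a Rules resource for Managed Service for Prometheus might look like the following. The alert name, expression, threshold, and severity label are illustrative assumptions, not the exact rules shipped with the integration; tune them for your mesh:

```yaml
# Hypothetical sketch of a Rules resource with one illustrative alert.
apiVersion: monitoring.googleapis.com/v1
kind: Rules
metadata:
  name: istio-rules
  namespace: NAMESPACE_NAME
spec:
  groups:
  - name: istio
    interval: 30s
    rules:
    - alert: IstioHighRequestErrorRate
      expr: |
        sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
          / sum(rate(istio_requests_total[5m])) > 0.05
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: More than 5% of mesh requests are returning 5xx responses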
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
For more information about applying rules to your cluster, see Managed rule evaluation and alerting.
This Rules configuration was adapted from the Istio rules provided by Awesome Prometheus Alerts.
You can adjust the alert thresholds to suit your application.
Verify the configuration
You can use Metrics Explorer to verify that you correctly configured the Istio exporter. It might take one or two minutes for Cloud Monitoring to ingest your metrics.
To verify that the metrics are ingested, do the following:
- In the Google Cloud console, go to the Metrics explorer page.
If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
- Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
- Enter and run the following query:
sum(istio_build{cluster="CLUSTER_NAME"}) by (component)
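Beyond the istio_build check above, you can query the ingested traffic metrics directly. For example, the following illustrative query (assuming your workloads are serving requests through the mesh) shows the per-workload request rate over the last five minutes:

```promql
sum(rate(istio_requests_total{cluster="CLUSTER_NAME"}[5m])) by (destination_workload)
```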
View dashboards
The Cloud Monitoring integration includes the Istio Envoy Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.
To view an installed dashboard, do the following:
- In the Google Cloud console, go to the Dashboards page.
If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Select the Dashboard List tab.
- Choose the Integrations category.
- Click the name of the dashboard, for example, Istio Envoy Prometheus Overview.
To view a static preview of the dashboard, do the following:
- In the Google Cloud console, go to the Integrations page.
If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Click the Kubernetes Engine deployment-platform filter.
- Locate the Istio integration and click View Details.
- Select the Dashboards tab.
Troubleshooting
For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.