This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from TensorFlow Serving (TF Serving). This document shows you how to do the following:
- Set up TF Serving to report metrics.
- Configure a PodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.
- Access a dashboard in Cloud Monitoring to view the metrics.
These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the TF Serving documentation for installation information.
These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.
For information about TensorFlow Serving, see TF Serving. For information about serving TF Serving models on Google Kubernetes Engine, see the GKE set-up guide for TF Serving.
Prerequisites
To collect metrics from TF Serving by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:
- Your cluster must be running Google Kubernetes Engine version 1.21.4-gke.300 or later.
- You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.
TF Serving exposes Prometheus-format metrics when the --monitoring_config_file flag is used to specify a file containing a MonitoringConfig protocol buffer.
The following is an example of a MonitoringConfig protocol buffer:
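prometheus_config {
  enable: true,
  path: "/monitoring/prometheus/metrics"
}
This example enables metric export at the /monitoring/prometheus/metrics path, which is the endpoint that the verification steps later in this document use.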
To specify the MonitoringConfig protocol buffer, do the following:
- Create a file named monitoring_config.txt containing the MonitoringConfig protocol buffer in the model directory, before uploading the directory to the Cloud Storage bucket.
- Upload the model directory to the Cloud Storage bucket:
  gcloud storage cp MODEL_DIRECTORY gs://CLOUD_STORAGE_BUCKET_NAME --recursive
- Set the environment variable PATH_TO_MONITORING_CONFIG to the path of the uploaded monitoring_config.txt file, for example:
  export PATH_TO_MONITORING_CONFIG=/data/tfserve-model-repository/monitoring_config.txt
- Add the following flag and value to the container's command in your container's deployment YAML file:
  "--monitoring_config_file=$PATH_TO_MONITORING_CONFIG"
For example, your command might look like the following:
command: [ "tensorflow_model_server", "--model_name=$MODEL_NAME", "--model_base_path=/data/tfserve-model-repository/$MODEL_NAME", "--rest_api_port=8000", "--monitoring_config_file=$PATH_TO_MONITORING_CONFIG" ]
Modify the TF Serving configuration
Modify the TF Serving configuration as shown in the following example. You must add any lines preceded by the + symbol to your configuration.
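The following is a minimal sketch, assuming a Deployment named tfserve, Pods labeled app: tfserve, the tensorflow/serving image, and the container command shown earlier in this document; adapt the names and the rest of the manifest to your own deployment. The line preceded by the + symbol is the addition that enables metric reporting.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tfserve
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tfserve
  template:
    metadata:
      labels:
        app: tfserve
    spec:
      containers:
      - name: tfserve
        image: tensorflow/serving
        command: [ "tensorflow_model_server",
                   "--model_name=$MODEL_NAME",
                   "--model_base_path=/data/tfserve-model-repository/$MODEL_NAME",
                   "--rest_api_port=8000",
+                  "--monitoring_config_file=$PATH_TO_MONITORING_CONFIG" ]
        ports:
        - containerPort: 8000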
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
To verify that TF Serving is emitting metrics on the expected endpoints, do the following:
- Set up port forwarding by using the following command:
  kubectl -n NAMESPACE_NAME port-forward POD_NAME 8000
- Access the endpoint localhost:8000/monitoring/prometheus/metrics by using the browser or the curl utility in another terminal session, as shown in the example after this list.
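For example, with the port forwarding from the previous step active, the following command prints the exported metrics:

curl http://localhost:8000/monitoring/prometheus/metrics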
Define a PodMonitoring resource
For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to TF Serving in the same namespace.
You can use the following PodMonitoring configuration:
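The following is a minimal sketch, assuming that your TF Serving Pods carry the label app: tfserve and serve metrics on port 8000, as in the Deployment sketch earlier in this document. The resource name tfserve becomes the job label that the verification query later in this document expects.

apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: tfserve
spec:
  selector:
    matchLabels:
      app: tfserve
  endpoints:
  - port: 8000
    path: /monitoring/prometheus/metrics
    interval: 30s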
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
Verify the configuration
You can use Metrics Explorer to verify that you correctly configured TF Serving. It might take one or two minutes for Cloud Monitoring to ingest your metrics.
To verify the metrics are ingested, do the following:
- In the Google Cloud console, go to the Metrics explorer page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
- Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
- Enter and run the following query:
  up{job="tfserve", cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}
View dashboards
The Cloud Monitoring integration includes the TensorFlow Serving Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.
To view an installed dashboard, do the following:
- In the Google Cloud console, go to the Dashboards page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Select the Dashboard List tab.
- Choose the Integrations category.
- Click the name of the dashboard, for example, TensorFlow Serving Prometheus Overview.
To view a static preview of the dashboard, do the following:
- In the Google Cloud console, go to the Integrations page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Click the Kubernetes Engine deployment-platform filter.
- Locate the TensorFlow Serving integration and click View Details.
- Select the Dashboards tab.
Troubleshooting
For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.