[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-08-12 (世界標準時間)。"],[],[],null,["# vLLM\n\n\u003cbr /\u003e\n\nThis document describes how to configure your Google Kubernetes Engine deployment\nso that you can use Google Cloud Managed Service for Prometheus to collect metrics from\n\nvLLM. This document shows you how to do the following:\n\n- Set up vLLM to report metrics.\n- Configure a PodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.\n- Access a dashboard in Cloud Monitoring to view the metrics.\n\n\u003cbr /\u003e\n\nThese instructions apply only if you are using [managed collection](/stackdriver/docs/managed-prometheus/setup-managed)\nwith Managed Service for Prometheus.\nIf you are using self-deployed collection, then see the\n\n[vLLM documentation](https://docs.vllm.ai/en/stable/serving/metrics.html)\n\nfor installation information.\n\nThese instructions are provided as an example and are expected to work in\nmost Kubernetes environments.\n\nIf you are having trouble installing an\napplication or exporter due to restrictive security or organizational policies,\nthen we recommend you consult open-source documentation for support.\n\nFor information about vLLM, see [vLLM](https://docs.vllm.ai/en/latest/).\n\nFor information about setting up vLLM on Google Kubernetes Engine,\nsee the GKE [guide for vLLM](/kubernetes-engine/docs/tutorials/serve-gemma-gpu-vllm).\n\nPrerequisites\n-------------\n\nTo collect metrics from\n\nvLLM\n\nby using\nManaged Service for Prometheus and managed collection, your deployment must\nmeet the following requirements:\n\n- Your cluster must be running Google Kubernetes Engine version 1.21.4-gke.300 or later.\n- You must be running Managed Service for Prometheus with managed collection enabled. For more information, see [Get started with managed collection](/stackdriver/docs/managed-prometheus/setup-managed).\n\n \u003cbr /\u003e\n\n\u003cbr /\u003e\n\nvLLM exposes Prometheus-format metrics automatically; you do not have to install it separately. To verify that vLLM is emitting metrics on the expected endpoints, do the following:\n\n\u003cbr /\u003e\n\n1. Set up port forwarding by using the following command: \n\n ```\n kubectl -n NAMESPACE_NAME port-forward POD_NAME 8000\n ```\n2. 
Define a PodMonitoring resource
-------------------------------

For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to vLLM in the same namespace.

You can use the following PodMonitoring configuration:

```
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: vllm
  labels:
    app.kubernetes.io/name: vllm
    app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
  endpoints:
  - port: 8000
    scheme: http
    interval: 30s
    path: /metrics
  selector:
    matchLabels:
      app: vllm-gemma-server
```

Ensure that the values of the `port` and `matchLabels` fields match those of the vLLM pods you want to monitor.

To apply configuration changes from a local file, run the following command:

```
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
```

You can also [use Terraform](/stackdriver/docs/managed-prometheus/setup-managed#terraform-scrape) to manage your configurations.
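After applying the resource, you can confirm that the managed collection operator has picked it up before moving on to Metrics Explorer. The following is a minimal sketch, assuming the PodMonitoring is named `vllm` as in the example above; the operator typically records status conditions on the resource once it has processed the configuration.

```
# Show the applied PodMonitoring resource, including any status conditions
# recorded by the operator (an error condition here usually points to a
# malformed endpoint or selector).
kubectl -n NAMESPACE_NAME get podmonitoring vllm -o yaml
```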
Verify the configuration
------------------------

You can use Metrics Explorer to verify that you correctly configured vLLM. It might take one or two minutes for Cloud Monitoring to ingest your metrics.

To verify the metrics are ingested, do the following:

1. In the Google Cloud console, go to the **Metrics explorer** page:

   [Go to **Metrics explorer**](https://console.cloud.google.com/monitoring/metrics-explorer)

   If you use the search bar to find this page, then select the result whose subheading is **Monitoring**.
2. In the toolbar of the query-builder pane, select the button whose name is either **MQL** or **PromQL**.
3. Verify that **PromQL** is selected in the **Language** toggle. The language toggle is in the same toolbar that lets you format your query.
4. Enter and run the following query:

   ```
   up{job="vllm", cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}
   ```

View dashboards
---------------

The Cloud Monitoring integration includes the **vLLM Prometheus Overview** dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

1. In the Google Cloud console, go to the **Dashboards** page:

   [Go to **Dashboards**](https://console.cloud.google.com/monitoring/dashboards)

   If you use the search bar to find this page, then select the result whose subheading is **Monitoring**.
2. Select the **Dashboard List** tab.
3. Choose the **Integrations** category.
4. Click the name of the dashboard, for example, **vLLM Prometheus Overview**.

To view a static preview of the dashboard, do the following:

1. In the Google Cloud console, go to the **Integrations** page:

   [Go to **Integrations**](https://console.cloud.google.com/monitoring/integrations)

   If you use the search bar to find this page, then select the result whose subheading is **Monitoring**.
2. Click the **Kubernetes Engine** deployment-platform filter.
3. Locate the vLLM integration and click **View Details**.
4. Select the **Dashboards** tab.

Troubleshooting
---------------

For information about troubleshooting metric ingestion problems, see [Problems with collection from exporters](/stackdriver/docs/managed-prometheus/troubleshooting#exporter-problems) in [Troubleshooting ingestion-side problems](/stackdriver/docs/managed-prometheus/troubleshooting#ingest-problems).
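As a first local check before working through that guide, it can help to confirm that the managed collectors are running and to look for scrape errors in their logs. The following is a minimal sketch; it assumes a default managed-collection setup, in which the collectors run as a DaemonSet named `collector` in the `gmp-system` namespace.

```
# List the managed collector pods (default managed-collection namespace).
kubectl -n gmp-system get pods

# Scan recent collector logs for scrape failures that mention the vLLM
# endpoint or the vllm PodMonitoring.
kubectl -n gmp-system logs ds/collector -c prometheus --tail=200 | grep -i vllm
```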