If your [cluster has nodes that use NVIDIA GPUs](/kubernetes-engine/distributed-cloud/bare-metal/docs/how-to/gpu-manual-use), you can monitor GPU utilization, performance, and health by configuring the cluster to send [NVIDIA Data Center GPU Manager (DCGM)](https://developer.nvidia.com/dcgm) metrics to Cloud Monitoring. This solution uses Google Cloud Managed Service for Prometheus to collect metrics from NVIDIA DCGM.

This page is for IT administrators and Operators who manage the lifecycle of the underlying tech infrastructure. To learn more about common roles and example tasks that we reference in Google Cloud content, see [Common GKE user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).
Before you begin
To use Google Cloud Managed Service for Prometheus to collect metrics from DCGM, your Google Distributed Cloud deployment must meet the following requirements:

- The NVIDIA [DCGM-Exporter tool](https://github.com/NVIDIA/dcgm-exporter) must already be installed on your cluster. DCGM-Exporter is installed when you install the NVIDIA GPU Operator. For NVIDIA GPU Operator installation instructions, see [Install and verify the NVIDIA GPU Operator](/kubernetes-engine/distributed-cloud/bare-metal/docs/how-to/gpu-manual-use#install_verify).

- Google Cloud Managed Service for Prometheus must be enabled. For instructions, see [Enable Google Cloud Managed Service for Prometheus](/kubernetes-engine/distributed-cloud/bare-metal/docs/how-to/application-logging-monitoring#enable_managed_prometheus).
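DCGM-Exporter serves metrics in the Prometheus text exposition format, which is what Managed Service for Prometheus scrapes. As a hedged illustration only, the following sketch parses a hypothetical `DCGM_FI_DEV_GPU_UTIL` sample line; the label values shown are invented, not output from a real cluster:

```python
import re

# A hypothetical scrape line in Prometheus text format; label values are made up.
sample = 'DCGM_FI_DEV_GPU_UTIL{gpu="0",UUID="GPU-abc123",pod="cuda-job"} 87'

# Shape: metric_name{label="value",...} sample_value
match = re.match(r'(?P<name>\w+)\{(?P<labels>[^}]*)\}\s+(?P<value>\S+)', sample)
name = match.group("name")
labels = dict(re.findall(r'(\w+)="([^"]*)"', match.group("labels")))
value = float(match.group("value"))

print(name, labels["gpu"], value)
```

The metric name and label set are what you later filter on in PromQL.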
Configure a PodMonitoring resource

Configure a PodMonitoring resource for Google Cloud Managed Service for Prometheus to collect the exported metrics. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, we recommend that you consult open-source documentation for support.
To ingest the metric data emitted by the DCGM Exporter Pod (`nvidia-dcgm-exporter`), Google Cloud Managed Service for Prometheus uses target scraping. Target scraping and metrics ingestion are configured using Kubernetes [custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). The managed service uses [PodMonitoring](https://github.com/GoogleCloudPlatform/prometheus-engine/blob/v0.13.0/doc/api.md#podmonitoring) custom resources.

A PodMonitoring custom resource scrapes targets only in the namespace in which it's deployed. To scrape targets in multiple namespaces, deploy the same PodMonitoring custom resource in each namespace.
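Because each PodMonitoring resource covers only its own namespace, deploying it to several namespaces means stamping out near-identical manifests. A minimal sketch of generating one manifest per namespace, assuming hypothetical namespace names (`gpu-team-a`, `gpu-team-b` are not from this guide):

```python
import json

# Base PodMonitoring manifest; a PodMonitoring resource only scrapes
# the namespace it is deployed in, so it is copied per namespace.
BASE_MANIFEST = {
    "apiVersion": "monitoring.googleapis.com/v1",
    "kind": "PodMonitoring",
    "metadata": {"name": "dcgm-gmp"},
    "spec": {
        "selector": {"matchLabels": {"app": "nvidia-dcgm-exporter"}},
        "endpoints": [{"port": "metrics", "interval": "30s"}],
    },
}

def manifests_for(namespaces):
    """Return one deep copy of the base manifest per target namespace."""
    out = []
    for ns in namespaces:
        m = json.loads(json.dumps(BASE_MANIFEST))  # cheap deep copy
        m["metadata"]["namespace"] = ns
        out.append(m)
    return out

# Hypothetical GPU workload namespaces.
manifests = manifests_for(["gpu-team-a", "gpu-team-b"])
```

Each generated manifest could then be serialized to YAML and applied with `kubectl apply` in the matching namespace.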
1. Create a manifest file with the following configuration:

   The `selector` section in the manifest specifies that the DCGM Exporter Pod, `nvidia-dcgm-exporter`, is selected for monitoring. This Pod is deployed when you install the NVIDIA GPU Operator.

       apiVersion: monitoring.googleapis.com/v1
       kind: PodMonitoring
       metadata:
         name: dcgm-gmp
       spec:
         selector:
           matchLabels:
             app: nvidia-dcgm-exporter
         endpoints:
         - port: metrics
           interval: 30s

2. Deploy the PodMonitoring custom resource:

       kubectl apply -n NAMESPACE -f FILENAME --kubeconfig KUBECONFIG

   Replace the following:

   - NAMESPACE: the namespace into which you're deploying the PodMonitoring custom resource.

   - FILENAME: the path of the manifest file for the PodMonitoring custom resource.

   - KUBECONFIG: the path of the kubeconfig file for the cluster.

3. To verify that the PodMonitoring custom resource is installed in the intended namespace, run the following command:

       kubectl get podmonitoring -n NAMESPACE --kubeconfig KUBECONFIG

   The output should look similar to the following:

       NAME       AGE
       dcgm-gmp   3m37s
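The `matchLabels` selector behaves like a subset test: a Pod is selected when its labels include every key/value pair in the selector, regardless of any extra labels the Pod carries. A small sketch of that rule; the extra Pod labels below are invented for illustration:

```python
def selected(pod_labels, match_labels):
    """A Pod matches when every selector pair appears in its labels."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

# The selector from the PodMonitoring manifest.
match_labels = {"app": "nvidia-dcgm-exporter"}

# Hypothetical Pods: the exporter Pod typically carries extra labels,
# which do not affect matching; other Pods are not selected.
exporter_pod = {"app": "nvidia-dcgm-exporter", "managed-by": "gpu-operator"}
other_pod = {"app": "nvidia-driver-daemonset"}
```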
Verify the configuration

You can use Metrics Explorer to verify that you correctly configured the DCGM exporter. It might take one or two minutes for Cloud Monitoring to ingest your metrics.

To verify that the metrics are ingested, do the following:
1. In the Google Cloud console, go to the *leaderboard* **Metrics explorer** page:

   [Go to **Metrics explorer**](https://console.cloud.google.com/monitoring/metrics-explorer)

   If you use the search bar to find this page, then select the result whose subheading is **Monitoring**.
2. Use Prometheus Query Language (PromQL) to specify the data to display on the chart:

   1. In the toolbar of the query-builder pane, click **< > PromQL**.

   2. Enter your query into the query editor. For example, to chart the GPU utilization reported by DCGM, use the following query:

          DCGM_FI_DEV_GPU_UTIL{cluster="CLUSTER_NAME", namespace="NAMESPACE"}

      Replace the following:

      - CLUSTER_NAME: the name of the cluster with nodes that are using GPUs.

      - NAMESPACE: the namespace into which you deployed the PodMonitoring custom resource.

      For more information about using PromQL, see [PromQL in Cloud Monitoring](/monitoring/promql).
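If you want an average over a window rather than instantaneous utilization, PromQL's `avg_over_time(...[1h])` can wrap the query above. The sketch below mimics what that aggregation computes over a toy series of utilization samples; the sample values are invented:

```python
# Toy DCGM_FI_DEV_GPU_UTIL samples (percent), e.g. one per 30s scrape interval.
samples = [80.0, 90.0, 70.0, 100.0]

# avg_over_time over a range window is the arithmetic mean of the
# samples that fall inside that window.
avg_utilization = sum(samples) / len(samples)
print(avg_utilization)  # → 85.0
```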
Last updated 2025-07-31 (UTC).