[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[],[],null,["# Customize the Dataflow monitoring dashboard\n\nThis page shows how to customize the Dataflow\n[project monitoring dashboard](/dataflow/docs/guides/project-monitoring), by\nadding a graph that queries Cloud Monitoring metrics.\n\nThe project monitoring dashboard lets you monitor a collection of jobs and see\ntheir overall health and performance. The dashboard contains a default set of\ncharts that are useful for most workloads. By customizing the dashboard, you can\nadd charts that are specific to your business requirements.\n\nExample scenario\n----------------\n\nTo show how you might customize the dashboard, assume that an organization wants\nto track the estimated cost of the top 25 jobs in their project.\n\nFor this scenario, assume that the jobs have the following characteristics:\n\n- The jobs are all streaming jobs.\n- The jobs use the [data-processed billing model](/dataflow/pricing#compute-resources).\n- The jobs save files to [Persistent Disk](/compute/docs/disks/persistent-disks#disk-types), both Standard (HDD) and SSD.\n- The jobs don't use GPUs.\n\nChoose metrics\n--------------\n\nThe following metrics are needed to estimate the cost of a job, given the\nassumptions listed the previous section:\n\nFor more information, see [Dataflow pricing](/dataflow/pricing).\n| **Important:** The actual cost to run a job depends on factors such as Dataflow pricing model, [committed use discounts (CUDs)](/dataflow/docs/cuds), and contractual discounts. Dataflow job metrics can give you a baseline for cost optimization, but the actual costs might differ.\n\nTo learn which resources are available in Monitoring, see\n[Monitored resource types](/monitoring/api/resources).\n\nWrite a PromQL query\n--------------------\n\nTo query metrics from Cloud Monitoring, use\n[Prometheus Query Language (PromQL)](/monitoring/promql). This section shows how to write\na PromQL query for the scenario by building it up from smaller clauses.\nThis section doesn't assume any prior knowledge of PromQL. For more\ninformation, see [PromQL in Cloud Monitoring](/monitoring/promql).\n\n### Map metric names to PromQL strings\n\nTo use Monitoring metrics in PromQL queries, you must map\nthe metric name to a PromQL string, as follows:\n\nFor more information, see\n[Mapping Cloud Monitoring metrics to PromQL](/monitoring/promql/promql-mapping).\n\n### Build the query\n\nTo get the estimated cost, calculate the prices for each component based on\nthe most current value of each metric. The metrics are sampled every 60 seconds,\nso to get the latest value, use a 1-minute windowing function and take the\nmaximum value within each window.\n\n- To get the estimated CPU cost, first convert `job/total_vcpu_time` from\n seconds to hours. Multiply by CPU price per vCPU per hour.\n\n # ((vCPU time)[Bucket 1m] / Seconds per hour * vCPU Price)\n max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * \u003cvar translate=\"no\"\u003eCPU_PRICE\u003c/var\u003e\n\n This formula gives the estimated CPU cost for all jobs in the project. 
### Build the query

To get the estimated cost, calculate the prices for each component based on
the most current value of each metric. The metrics are sampled every 60 seconds,
so to get the latest value, use a 1-minute windowing function and take the
maximum value within each window.

- To get the estimated CPU cost, first convert `job/total_vcpu_time` from
  seconds to hours. Multiply by the CPU price per vCPU per hour.

      # ((vCPU time)[Bucket 1m] / Seconds per hour) * vCPU price
      max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * CPU_PRICE

  This formula gives the estimated CPU cost for all jobs in the project. To get
  the estimated CPU cost per job, use the `sum` aggregation operator and group
  by job ID.

      sum(
        max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * CPU_PRICE
      ) by (job_id)

- To get the estimated memory cost, convert `job/total_memory_usage_time` from
  GB-seconds to GB-hours. Multiply by the memory price per GB per hour and
  group by job ID.

      # ((Memory time)[Bucket 1m] / Seconds per hour) * Memory price
      sum(
        max_over_time(dataflow_googleapis_com:job_total_memory_usage_time[1m]) / 3600 * MEMORY_PRICE
      ) by (job_id)

- To get the estimated shuffle cost, convert
  `job/total_streaming_data_processed` from bytes to GB. Multiply by the price
  of data processed during shuffle per GB and group by job ID.

      # Shuffle billing. Reported once every 60 seconds, measured in bytes.
      # ((Shuffle data)[Bucket 1m] / Bytes per GB) * Shuffle price
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_streaming_data_processed[1m]
        ) / 1000000000 * SHUFFLE_PRICE
      ) by (job_id)

- To get the estimated Persistent Disk usage cost, index on the `storage_type`
  label to separate the values by disk type (Standard or SSD). Convert each
  value from GB-seconds to GB-hours and group by job ID.

      # ((Standard PD time)[Bucket 1m] / Seconds per hour) * Standard PD price
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_pd_usage_time{storage_type="HDD"}[1m]
        ) / 3600 * STANDARD_PD_PRICE
      ) by (job_id)

      # ((SSD PD time)[Bucket 1m] / Seconds per hour) * SSD PD price
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_pd_usage_time{storage_type="SSD"}[1m]
        ) / 3600 * SSD_PD_PRICE
      ) by (job_id)

- Summing the previous values gives the estimated cost per job. To get the top
  25 jobs, wrap the sum in a `topk` filter:

      topk(25,
        # Sum the individual values.
      )

### Write the complete query

The following shows the complete query:

    topk(25,
      sum(
        max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * CPU_PRICE
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_memory_usage_time[1m]) / 3600 * MEMORY_PRICE
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_streaming_data_processed[1m]) / 1000000000 * SHUFFLE_PRICE
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_pd_usage_time{storage_type="HDD"}[1m]) / 3600 * STANDARD_PD_PRICE
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_pd_usage_time{storage_type="SSD"}[1m]) / 3600 * SSD_PD_PRICE
      ) by (job_id)
    )

Replace the following variables with values from the
[Dataflow pricing page](/dataflow/pricing) for your region:

- `CPU_PRICE`: CPU price, per vCPU per hour
- `MEMORY_PRICE`: Memory price, per GB per hour
- `SHUFFLE_PRICE`: Shuffle price, per GB
- `STANDARD_PD_PRICE`: Standard Persistent Disk price, per GB per hour
- `SSD_PD_PRICE`: SSD Persistent Disk price, per GB per hour
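To see what the chart query looks like with concrete numbers, the following is
a minimal sketch with made-up placeholder prices substituted for the variables.
These values are illustrative only and are not Dataflow rates; replace each one
with the current price for your region before you use the query.

    # Example only: every price below is a made-up placeholder, not a real rate.
    topk(25,
      sum(
        max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * 0.07          # placeholder CPU price
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_memory_usage_time[1m]) / 3600 * 0.004 # placeholder memory price
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_streaming_data_processed[1m]) / 1000000000 * 0.01 # placeholder shuffle price
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_pd_usage_time{storage_type="HDD"}[1m]) / 3600 * 0.0001 # placeholder Standard PD price
      ) by (job_id) +
      sum(
        max_over_time(dataflow_googleapis_com:job_total_pd_usage_time{storage_type="SSD"}[1m]) / 3600 * 0.0002 # placeholder SSD PD price
      ) by (job_id)
    )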
Edit the dashboard
------------------

If you didn't previously create a custom dashboard for Dataflow in this
project, create one as follows:

1. In the Google Cloud console, go to the **Dataflow** > **Monitoring** page.

   [Go to Monitoring](https://console.cloud.google.com/dataflow/monitoring)

2. In the **Predefined** drop-down, click **Customize dashboard**.

3. Optional: Enter a name for the dashboard in the **Custom view name** box.

If you previously created a custom dashboard, perform the following steps to
edit the dashboard:

1. In the Google Cloud console, go to the **Dataflow** > **Monitoring** page.

   [Go to Monitoring](https://console.cloud.google.com/dataflow/monitoring)

2. In the **Predefined** drop-down, select the custom dashboard.

3. Click **Edit dashboard**.

Add a metrics chart
-------------------

To add a metrics chart to the dashboard, perform the following steps:

1. Click **Add widget**.
2. In the **Add widget** pane, select **Metric**.
3. In the **Widget title** box, enter a title for the chart, such as `Estimated cost (top 25)`.
4. Click **PromQL**.
5. Paste in the PromQL query shown [previously](#complete-query).
6. Click **Run query**.
7. Click **Apply**.
8. Optional: Drag the chart to position it on the dashboard.

What's next
-----------

- [Add a reference line](/monitoring/charts/chart-view-options#threshold-option) so that you can see when a metric exceeds a predefined threshold.
- [Learn PromQL](/monitoring/promql#learning-promql).
- [Learn more about dashboards](/monitoring/dashboards).