[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-08-18 UTC。"],[],[],null,["# Customize the Dataflow monitoring dashboard\n\nThis page shows how to customize the Dataflow\n[project monitoring dashboard](/dataflow/docs/guides/project-monitoring), by\nadding a graph that queries Cloud Monitoring metrics.\n\nThe project monitoring dashboard lets you monitor a collection of jobs and see\ntheir overall health and performance. The dashboard contains a default set of\ncharts that are useful for most workloads. By customizing the dashboard, you can\nadd charts that are specific to your business requirements.\n\nExample scenario\n----------------\n\nTo show how you might customize the dashboard, assume that an organization wants\nto track the estimated cost of the top 25 jobs in their project.\n\nFor this scenario, assume that the jobs have the following characteristics:\n\n- The jobs are all streaming jobs.\n- The jobs use the [data-processed billing model](/dataflow/pricing#compute-resources).\n- The jobs save files to [Persistent Disk](/compute/docs/disks/persistent-disks#disk-types), both Standard (HDD) and SSD.\n- The jobs don't use GPUs.\n\nChoose metrics\n--------------\n\nThe following metrics are needed to estimate the cost of a job, given the\nassumptions listed the previous section:\n\nFor more information, see [Dataflow pricing](/dataflow/pricing).\n| **Important:** The actual cost to run a job depends on factors such as Dataflow pricing model, [committed use discounts (CUDs)](/dataflow/docs/cuds), and contractual discounts. Dataflow job metrics can give you a baseline for cost optimization, but the actual costs might differ.\n\nTo learn which resources are available in Monitoring, see\n[Monitored resource types](/monitoring/api/resources).\n\nWrite a PromQL query\n--------------------\n\nTo query metrics from Cloud Monitoring, use\n[Prometheus Query Language (PromQL)](/monitoring/promql). This section shows how to write\na PromQL query for the scenario by building it up from smaller clauses.\nThis section doesn't assume any prior knowledge of PromQL. For more\ninformation, see [PromQL in Cloud Monitoring](/monitoring/promql).\n\n### Map metric names to PromQL strings\n\nTo use Monitoring metrics in PromQL queries, you must map\nthe metric name to a PromQL string, as follows:\n\nFor more information, see\n[Mapping Cloud Monitoring metrics to PromQL](/monitoring/promql/promql-mapping).\n\n### Build the query\n\nTo get the estimated cost, calculate the prices for each component based on\nthe most current value of each metric. The metrics are sampled every 60 seconds,\nso to get the latest value, use a 1-minute windowing function and take the\nmaximum value within each window.\n\n- To get the estimated CPU cost, first convert `job/total_vcpu_time` from\n seconds to hours. Multiply by CPU price per vCPU per hour.\n\n # ((vCPU time)[Bucket 1m] / Seconds per hour * vCPU Price)\n max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * \u003cvar translate=\"no\"\u003eCPU_PRICE\u003c/var\u003e\n\n This formula gives the estimated CPU cost for all jobs in the project. 
### Build the query

To get the estimated cost, calculate the cost of each component based on the
most recent value of each metric. The metrics are sampled every 60 seconds,
so to get the latest value, use a 1-minute windowing function and take the
maximum value within each window.

- To get the estimated CPU cost, first convert `job/total_vcpu_time` from
  vCPU-seconds to vCPU-hours. Multiply by the CPU price per vCPU per hour.

      # ((vCPU time)[Bucket 1m] / Seconds per hour) * vCPU price
      max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * CPU_PRICE

  This formula gives the estimated CPU cost for all jobs in the project. To get
  the estimated CPU cost per job, use the `sum` aggregation operator and group
  by job ID.

      sum(
        max_over_time(dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * CPU_PRICE
      ) by (job_id)

- To get the estimated memory cost, convert `job/total_memory_usage_time` from
  GB-seconds to GB-hours. Multiply by the memory price per GB per hour and group
  by job ID.

      # ((Memory time)[Bucket 1m] / Seconds per hour) * Memory price
      sum(
        max_over_time(dataflow_googleapis_com:job_total_memory_usage_time[1m]) / 3600 * MEMORY_PRICE
      ) by (job_id)

- To get the estimated shuffle cost, convert
  `job/total_streaming_data_processed` from bytes to GB. Multiply by the price
  of data processed during shuffle per GB and group by job ID.

      # Shuffle billing. Reported once every 60 seconds, measured in bytes.
      # Formula: ((Shuffle data)[Bucket 1m] / Bytes per GB) * Shuffle price
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_streaming_data_processed[1m]
        ) / 1000000000 * SHUFFLE_PRICE
      ) by (job_id)

- To get the estimated Persistent Disk usage cost, index on the
  `storage_type` label to separate usage by disk type (Standard or SSD). Convert
  each value from GB-seconds to GB-hours and group by job ID.

      # Formula: ((Standard PD time)[Bucket 1m] / Seconds per hour) * Standard PD price
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_pd_usage_time{storage_type="HDD"}[1m]
        ) / 3600 * STANDARD_PD_PRICE
      ) by (job_id)

      # Formula: ((SSD PD time)[Bucket 1m] / Seconds per hour) * SSD PD price
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_pd_usage_time{storage_type="SSD"}[1m]
        ) / 3600 * SSD_PD_PRICE
      ) by (job_id)

- Summing the previous values gives the estimated cost per job. To get the top
  25 jobs, use the `topk` aggregation operator:

      topk(25,
        # Sum the individual values.
      )
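As a quick sanity check of the unit conversions in the formulas above, the
following Python sketch applies the same arithmetic to sample metric values.
The prices and metric values are hypothetical placeholders, not actual
Dataflow rates; get real prices from the pricing page.

    # Hypothetical prices and sample metric values, for illustration only.
    CPU_PRICE = 0.07        # per vCPU per hour (placeholder)
    MEMORY_PRICE = 0.004    # per GB per hour (placeholder)
    SHUFFLE_PRICE = 0.02    # per GB of data processed (placeholder)

    vcpu_seconds = 7_200             # job/total_vcpu_time sample
    memory_gb_seconds = 28_800       # job/total_memory_usage_time sample
    shuffle_bytes = 5_000_000_000    # job/total_streaming_data_processed sample

    # The Persistent Disk components are omitted here; they follow the same
    # GB-seconds -> GB-hours conversion as memory.
    estimated_cost = (
        vcpu_seconds / 3600 * CPU_PRICE            # vCPU-seconds -> vCPU-hours
        + memory_gb_seconds / 3600 * MEMORY_PRICE  # GB-seconds -> GB-hours
        + shuffle_bytes / 1e9 * SHUFFLE_PRICE      # bytes -> GB
    )
    print(f"Estimated cost so far: ${estimated_cost:.2f}")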
### Write the complete query

The following shows the complete query:

    topk(25,
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_vcpu_time[1m]) / 3600 * CPU_PRICE
      )
      by (job_id) +
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_memory_usage_time[1m]) / 3600 * MEMORY_PRICE
      )
      by (job_id) +
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_streaming_data_processed[1m]) / 1000000000 * SHUFFLE_PRICE
      )
      by (job_id) +
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_pd_usage_time{storage_type="HDD"}[1m]) / 3600 * STANDARD_PD_PRICE
      )
      by (job_id) +
      sum(
        max_over_time(
          dataflow_googleapis_com:job_total_pd_usage_time{storage_type="SSD"}[1m]) / 3600 * SSD_PD_PRICE
      )
      by (job_id)
    )

Replace the following variables with values from the
[Dataflow pricing page](/dataflow/pricing) for your region:

- `CPU_PRICE`: CPU price, per vCPU per hour
- `MEMORY_PRICE`: memory price, per GB per hour
- `SHUFFLE_PRICE`: shuffle price, per GB
- `STANDARD_PD_PRICE`: Standard Persistent Disk price, per GB per hour
- `SSD_PD_PRICE`: SSD Persistent Disk price, per GB per hour

Edit the dashboard
------------------

If you didn't previously create a custom dashboard for Dataflow
in this project, create one as follows:

1. In the Google Cloud console, go to the **Dataflow** > **Monitoring** page.

   [Go to Monitoring](https://console.cloud.google.com/dataflow/monitoring)

2. In the **Predefined** drop-down, click **Customize dashboard**.

3. Optional: Enter a name for the dashboard in the **Custom view name** box.

If you previously created a custom dashboard, perform the following steps to
edit the dashboard:

1. In the Google Cloud console, go to the **Dataflow** > **Monitoring** page.

   [Go to Monitoring](https://console.cloud.google.com/dataflow/monitoring)

2. In the **Predefined** drop-down, select the custom dashboard.

3. Click **Edit dashboard**.

Add a metrics chart
-------------------

To add a metrics chart to the dashboard, perform the following steps:

1. Click **Add widget**.
2. In the **Add widget** pane, select **Metric**.
3. In the **Widget title** box, enter a title for the chart, such as `Estimated cost (top 25)`.
4. Click **PromQL**.
5. Paste in the PromQL query shown [previously](#complete-query).
6. Click **Run query**.
7. Click **Apply**.
8. Optional: Drag the chart to position it on the dashboard.

What's next
-----------

- [Add a reference line](/monitoring/charts/chart-view-options#threshold-option) so that you can see when a metric exceeds a predefined threshold.
- [Learn PromQL](/monitoring/promql#learning-promql).
- [Learn more about dashboards](/monitoring/dashboards).