[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[],[],null,["# Troubleshoot slow or stuck batch jobs\n\nThis page explains how to troubleshoot common causes of slow or stuck\nDataflow batch jobs. \n\nIf your batch job is slow or stuck, use the\n[Execution details](/dataflow/docs/concepts/execution-details) tab to find more\ninformation about the job and to identify the stage or worker that's causing a bottleneck.\n\nIdentify the root cause\n-----------------------\n\n1. Check whether the job is running into issues during worker startup. For more\n information, see\n [Error syncing pod](/dataflow/docs/guides/common-errors#error-syncing-pod).\n\n To verify the job has started processing data, look in the\n [job-message](/dataflow/docs/guides/logging#log-types) log for the following\n log entry: \n\n All workers have finished the startup processes and began to receive work requests\n\n2. To compare job performance between different jobs, make sure the volume of\n input data, worker configuration, autoscaling behavior, and\n [Dataflow Shuffle](/dataflow/docs/shuffle-for-batch) settings\n are the same.\n\n3. Check the [job-message](/dataflow/docs/guides/logging#log-types) logs for\n issues such as quota limits, stockout issues, or IP address exhaustion.\n\n4. In the [**Execution details**](/dataflow/docs/concepts/execution-details)\n tab, compare the\n [stage progress](/dataflow/docs/concepts/execution-details#progress-batch)\n to identify stages that took longer.\n\n5. Look for any stragglers in the job. For more information, see\n [Troubleshooting stragglers in batch jobs](/dataflow/docs/guides/troubleshoot-batch-stragglers).\n\n6. Check the throughput, CPU, and memory utilization metrics.\n\n7. Check the [worker logs](/dataflow/docs/guides/logging#log-types) for warnings\n and errors.\n\n - If the worker logs contain errors, view the stack trace. Investigate whether the error is caused by a bug in your code.\n - Look for Dataflow errors. See [Troubleshoot Dataflow errors](/dataflow/docs/guides/common-errors).\n - Look for [out-of-memory errors](/dataflow/docs/guides/troubleshoot-oom#find-errors), which can cause a stuck pipeline. If you see out-of-memory errors, follow the steps in [Troubleshoot Dataflow out of memory errors](/dataflow/docs/guides/troubleshoot-oom).\n - To identify a slow or stuck step, check the worker logs for `Operation ongoing` messages. View the stack trace to see where the step is spending time. For more information, see [Processing stuck or operation ongoing](/dataflow/docs/guides/common-errors#processing-stuck).\n8. [Check for hot keys](#hot-keys).\n\n9. If you aren't using\n [Dataflow Shuffle](/dataflow/docs/shuffle-for-batch), check the\n [shuffler logs](/dataflow/docs/guides/logging#log-types) for warnings and\n errors during shuffle operation. If you see an\n [RPC timeout error](/dataflow/docs/guides/troubleshoot-networking#worker-communicate-firewall-ports)\n on port 12345 or 12346, your job might be missing a firewall rule. See\n [Firewall rules for Dataflow](/dataflow/docs/guides/routes-firewall#firewall_rules).\n\n10. 
Identify stragglers
-------------------

A straggler is a work item that is slow relative to other work items in the
stage. For information about identifying and fixing stragglers, see
[Troubleshoot stragglers in batch jobs](/dataflow/docs/guides/troubleshoot-batch-stragglers).

Identify slow or stuck stages
-----------------------------

To identify slow or stuck stages, use the
[**Stage progress**](/dataflow/docs/concepts/execution-details#progress-batch)
view. Longer bars indicate that the stage takes more time. Use this view to
identify the slowest stages in your pipeline.

After you find the bottleneck stage, you can take the following steps:

- Identify the [lagging worker](#lagging-worker) within that stage.
- If there are no lagging workers, identify the slowest step by using the
  [**Stage info**](/dataflow/docs/concepts/execution-details#stage-info) panel.
  Use this information to identify candidates for user code optimization.
- To find parallelism bottlenecks, use
  [Dataflow monitoring metrics](#debug-tools).

Identify a lagging worker
-------------------------

To identify a lagging worker for a specific stage, use the
[**Worker progress**](/dataflow/docs/concepts/execution-details#worker-progress)
view. This view shows whether all workers are processing work until the end of
the stage, or if a single worker is stuck on a lagging task. If you find a
lagging worker, take the following steps:

- View the log files for that worker. For more information, see
  [Monitor and view pipeline logs](/dataflow/docs/guides/logging#MonitoringLogs).
- View the [CPU utilization metrics](/dataflow/docs/guides/using-monitoring-intf#cpu-use)
  and the [worker progress](/dataflow/docs/concepts/execution-details#worker-progress)
  details for lagging workers. If you see unusually high or low CPU
  utilization, look in the log files for that worker for the following issues
  (if a hot key is confirmed, see the mitigation sketch after this list):
  - [`A hot key ... was detected`](/dataflow/docs/guides/common-errors#hot-key-detected)
  - [`Processing stuck ... Operation ongoing`](/dataflow/docs/guides/common-errors#processing-stuck)
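If those logs confirm a hot key, one common mitigation in an Apache Beam
Python pipeline is to fan out the combine so that a single hot key is
pre-aggregated on many workers before the final per-key merge. The following
is a minimal sketch, not the only possible fix; the input and output paths and
the `user_id` field are hypothetical, and the fanout of 16 is an arbitrary
starting value to tune.

```python
import json

import apache_beam as beam


def key_by_user(line):
    """Parses a JSON record and keys it by user ID ("user_id" is assumed)."""
    record = json.loads(line)
    return record["user_id"], 1


with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/events-*.json")
        | "KeyByUser" >> beam.Map(key_by_user)
        # Fan out the combine: partial sums for a hot key are computed in
        # parallel on many workers, then merged, instead of landing on one
        # worker.
        | "CountPerUser" >> beam.CombinePerKey(sum).with_hot_key_fanout(16)
        | "Write" >> beam.io.WriteToText("gs://my-bucket/counts")
    )
```

For `GroupByKey`-based stages, where combiner fanout doesn't apply, an
alternative is to redistribute the data under a salted composite key and
re-aggregate afterward.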
Tools for debugging
-------------------

When you have a slow or stuck pipeline, the following tools can help you
diagnose the problem.

- To correlate incidents and identify bottlenecks, use
  [Cloud Monitoring for Dataflow](/dataflow/docs/guides/using-cloud-monitoring).
- To monitor pipeline performance, use
  [Cloud Profiler](/dataflow/docs/guides/profiling-a-pipeline).
- Some transforms are better suited to high-volume pipelines than others. Log
  messages can [identify a stuck user transform](/dataflow/docs/guides/common-errors#processing-stuck)
  in either batch or streaming pipelines.
- To learn more about a stuck job, use
  [Dataflow job metrics](/dataflow/docs/guides/using-monitoring-intf). The
  following list includes useful metrics:
  - The [Backlog bytes](/dataflow/docs/guides/using-monitoring-intf#backlog)
    metric (`backlog_bytes`) measures the amount of unprocessed input in bytes
    by stage. Use this metric to find a fused step that has no throughput.
    Similarly, the backlog elements metric (`backlog_elements`) measures the
    number of unprocessed input elements for a stage. For a programmatic
    example, see the sketch at the end of this page.
  - The [Processing parallelism keys](/dataflow/docs/guides/using-monitoring-intf#parallelism)
    metric (`processing_parallelism_keys`) measures the number of parallel
    processing keys for a particular stage of the pipeline over the last five
    minutes. Use this metric to investigate in the following ways:
    - Narrow the issue down to specific stages and confirm hot key warnings,
      such as [`A hot key ... was detected`](/dataflow/docs/guides/common-errors#hot-key-detected).
    - Find throughput bottlenecks caused by insufficient parallelism. These
      bottlenecks can result in slow or stuck pipelines.
  - The [System lag](/dataflow/docs/guides/using-monitoring-intf#system_latency_streaming)
    metric (`system_lag`) and the per-stage system lag metric
    (`per_stage_system_lag`) measure the maximum amount of time an item of
    data has been processing or awaiting processing. Use these metrics to
    identify inefficient stages and bottlenecks from data sources.

For additional metrics that aren't included in the Dataflow monitoring web
interface, see the complete list of Dataflow metrics in
[Google Cloud metrics](/monitoring/api/metrics_gcp_d_h#gcp-dataflow).
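As a starting point for reading these metrics programmatically, the following
is a minimal sketch that queries the per-stage `backlog_bytes` time series
with the Cloud Monitoring Python client. `PROJECT_ID` and `JOB_ID` are
placeholders, and the sketch assumes the
`dataflow.googleapis.com/job/backlog_bytes` metric type and its `stage` label
as listed in Google Cloud metrics.

```python
import time

from google.cloud import monitoring_v3

# Placeholders: replace with your project ID and Dataflow job ID.
PROJECT_ID = "PROJECT_ID"
JOB_ID = "JOB_ID"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# Read the last hour of backlog per stage. A stage whose backlog stays
# flat while other stages drain is a candidate bottleneck.
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type="dataflow.googleapis.com/job/backlog_bytes" '
            f'AND resource.labels.job_id="{JOB_ID}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    stage = series.metric.labels.get("stage", "")
    for point in series.points:
        print(stage, point.interval.end_time, point.value.int64_value)
```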