Profiling Dataflow pipelines with Cloud Profiler

Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications. Dataflow integration with Cloud Profiler helps you identify the parts of the pipeline code consuming the most resources.

Before you begin

Understand the concepts of Cloud Profiler and familiarize yourself with the profiler interface.

The Cloud Profiler API for your project is enabled automatically the first time you visit the Profiler page. Make sure your project has enough Cloud Profiler quota.

Enable Cloud Profiler for Dataflow pipelines

Cloud Profiler is available for Dataflow pipelines written with the Apache Beam SDK for Java or Python, version 2.33.0 or later. Profiling is enabled at pipeline start time. The amortized CPU and memory overhead is expected to be less than 1 percent for your pipelines.


To enable CPU profiling for a Java pipeline, start the pipeline with the option --dataflowServiceOptions=enable_google_cloud_profiler.

To enable heap profiling for a Java pipeline, start the pipeline with both --dataflowServiceOptions=enable_google_cloud_profiler and --dataflowServiceOptions=enable_google_cloud_heap_sampling. Heap profiling requires Java 11 or later.
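For example, both options can be passed on the command line when the pipeline is launched. The sketch below assumes a Maven project; the main class, project, region, and bucket names are placeholders to replace with your own values.

```shell
# Dataflow service options that enable CPU and heap profiling for a Java pipeline.
PROFILING_OPTS="--dataflowServiceOptions=enable_google_cloud_profiler --dataflowServiceOptions=enable_google_cloud_heap_sampling"

# Hypothetical launch command (substitute your own main class and Google Cloud values):
#   mvn compile exec:java \
#     -Dexec.mainClass=com.example.MyPipeline \
#     -Dexec.args="--runner=DataflowRunner --project=my-project \
#       --region=us-central1 --tempLocation=gs://my-bucket/tmp ${PROFILING_OPTS}"
echo "${PROFILING_OPTS}"
```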


To use Cloud Profiler, your Python pipeline must run with Dataflow Runner v2.

To enable CPU profiling for a Python pipeline, start the pipeline with the option --dataflow_service_options=enable_google_cloud_profiler. Heap profiling is not yet supported for Python.
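For example, a Python pipeline launch might look like the sketch below; the script name, project, region, and bucket are placeholders.

```shell
# Service option that enables CPU profiling for a Python pipeline.
PROFILING_OPT="--dataflow_service_options=enable_google_cloud_profiler"

# Hypothetical launch command (Runner v2 is required for profiling on Python):
#   python my_pipeline.py \
#     --runner=DataflowRunner --project=my-project --region=us-central1 \
#     --temp_location=gs://my-bucket/tmp "${PROFILING_OPT}"
echo "${PROFILING_OPT}"
```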

If you deploy your pipelines from Dataflow templates, you can enable Cloud Profiler by specifying the enable_google_cloud_profiler and enable_google_cloud_heap_sampling flags as additional experiments.


If you use a Google-provided template, you can specify the flags in the Additional experiments field on the Dataflow Create job from template page.


If you use the gcloud command-line tool to run templates (gcloud dataflow jobs run for classic templates, or gcloud dataflow flex-template run for Flex Templates), you can specify the flags with the --additional-experiments option.
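For example, with a Flex Template, the experiments can be passed as a comma-separated list. In this sketch, the job name, template path, and region are placeholders.

```shell
# Experiments that enable CPU and heap profiling when running a template.
EXPERIMENTS="enable_google_cloud_profiler,enable_google_cloud_heap_sampling"

# Hypothetical Flex Template run (substitute your own job name, template path, and region):
#   gcloud dataflow flex-template run my-profiled-job \
#     --template-file-gcs-location=gs://my-bucket/templates/my-template.json \
#     --region=us-central1 \
#     --additional-experiments="${EXPERIMENTS}"
echo "${EXPERIMENTS}"
```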


If you use the REST API to run templates, you can specify the flags in the additionalExperiments field of the runtime environment, either RuntimeEnvironment or FlexTemplateRuntimeEnvironment, depending on the template type.
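For example, a classic template launch request could carry the flags in its RuntimeEnvironment. This is a sketch: the job name, project, region, and template path are placeholders, and the commented call requires authentication.

```shell
# Request body for a classic template launch, with the profiling experiments
# in the RuntimeEnvironment's additionalExperiments field.
BODY='{
  "jobName": "my-profiled-job",
  "environment": {
    "additionalExperiments": [
      "enable_google_cloud_profiler",
      "enable_google_cloud_heap_sampling"
    ]
  }
}'

# Hypothetical call (substitute your own project, region, and template path):
#   curl -X POST \
#     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#     -H "Content-Type: application/json" \
#     -d "${BODY}" \
#     "https://dataflow.googleapis.com/v1b3/projects/my-project/locations/us-central1/templates:launch?gcsPath=gs://my-bucket/templates/my-template"
echo "${BODY}" | python3 -m json.tool > /dev/null
```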

View the profiling data

If Cloud Profiler is enabled, a link to the Profiler page is shown on the job page.

[Screenshot: link to the Profiler page on the job page]

You can also open the Profiler page directly to find the profiling data for your Dataflow pipeline. There, the Service is your job name and the Version is your job ID.

[Screenshot: Profiler page showing a Dataflow job's profiling data]


Troubleshooting

If you enabled Cloud Profiler but your pipeline is not generating profiling data, one of the following common causes might be the reason:

  • Your pipeline uses an older Apache Beam SDK version. To use Cloud Profiler, you need version 2.33.0 or later. You can view your pipeline's Apache Beam SDK version on the job page. If your job is created from Dataflow templates, make sure the templates use supported SDK versions.

  • Your project is running out of Cloud Profiler quota. You can view the quota usage on your project's quota page. The Cloud Profiler service rejects profiling data after you reach your quota.

The Cloud Profiler agent is installed when Dataflow workers start. Log messages generated by Cloud Profiler are available in the worker startup logs.

[Screenshot: Cloud Profiler agent messages in the worker startup logs]