Troubleshooting Your Pipeline

This section is a compendium of troubleshooting tips and debugging strategies that you might find helpful if you're having trouble building or running your Dataflow pipeline. It can help you detect a pipeline failure, determine the reason behind a failed pipeline run, and find courses of action to correct the problem.

Dataflow provides real-time feedback on your job, and there is a basic set of steps you can use to check error messages and logs, and to detect conditions such as stalled job progress.

This section also contains a catalog of common errors you might encounter when running your Dataflow pipeline, and suggests some corrective actions and workarounds for each.

Checking Your Pipeline's Status

You can detect any errors in your pipeline runs by using the Dataflow Monitoring Interface.

  1. Go to the Google Cloud Platform Console.
  2. Select your Cloud Platform project from the project list.
  3. Click the menu in the upper left corner.
  4. Navigate to the Big Data section and click Dataflow. A list of running jobs appears in the right-hand pane.
  5. Select the pipeline job you want to view. You can see each job's status at a glance in the Status field: "Running," "Succeeded," or "Failed."
Figure 1: A list of Dataflow jobs in the Developer Console with jobs in the running, succeeded, and failed states.

Basic Troubleshooting Workflow

If one of your pipeline jobs has failed, you can select the job to view more detailed information on errors and run results. When you select a job, you can view the execution graph as well as some information about the job on the Summary page to the right of the graph. The top of the page contains a button to view logs, as well as indicators if the job generated errors or warnings during execution.

Figure 2: A Dataflow Job Summary with errors indicated.

Checking Job Error Messages

You can click the Logs button to view log messages generated by your pipeline code and the Dataflow service. Filter the messages that appear in the logs panel by using the Minimum Severity drop-down menu. Select the Error filter to display error messages only.

Click the triangle icon next to each error message to expand it.

Figure 3: A list of Dataflow Job Error messages, with one message expanded.

Viewing Step Logs for Your Job

When you select a step in your pipeline graph, the logs panel toggles from displaying Job Logs generated by the Dataflow service to showing logs from the Compute Engine instances running your pipeline step.

Figure 4: The Cloud Logging button in the Dataflow Job Summary.

Google Cloud Logging aggregates all of the logs collected from your project's Compute Engine instances in one location. See Logging Pipeline Messages for more information on using Dataflow's various logging capabilities.

Handling Automated Pipeline Rejection

In some cases, the Dataflow service identifies that your pipeline might trigger known SDK issues. To prevent pipelines that are likely to encounter problems from being submitted, Dataflow automatically rejects your pipeline and displays the following message:

The workflow was automatically rejected by the service because it may trigger an identified bug in the
SDK (details below). If you think this identification is in error, and would like to override this
automated rejection, please re-submit this workflow with the following override flag: ${override-flag}.
Bug-details: ${bug-details-and-explanation}.
Contact dataflow-feedback@google.com for further help. Please use this identifier in your communication: ${bug-id}.

After reading the caveats in the linked bug details, if you want to try to run your pipeline anyway, you can override the automated rejection. Add the flag --experiments=<override-flag> and resubmit your pipeline.
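
For instance, here is a minimal Java sketch of resubmitting with the override flag passed as an ordinary pipeline option; the class name and the other flags shown in the comment are illustrative assumptions, and <override-flag> is the value from the rejection message:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class ResubmitWithOverride {
  public static void main(String[] args) {
    // Resubmit with, for example:
    //   --runner=DataflowRunner --project=<your-project> --experiments=<override-flag>
    // fromArgs() parses --experiments just like any other pipeline option.
    PipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().create();

    Pipeline p = Pipeline.create(options);
    // ... rebuild and run the same pipeline you originally submitted ...
    p.run();
  }
}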

Determining the Cause of a Pipeline Failure

Typically, a failed Dataflow pipeline run can be attributed to one of the following causes:

  • Graph or pipeline construction errors. These errors occur when Dataflow runs into a problem building the graph of steps that compose your pipeline, as described by your Dataflow program.
  • Errors in job validation. The Dataflow service validates any pipeline job you launch; errors in the validation process can prevent your job from being successfully created or executed. Validation errors can include problems with your Cloud Platform project's Cloud Storage bucket, or with your project's permissions.
  • Exceptions in worker code. These errors occur when there are errors or bugs in the user-provided code that Dataflow distributes to parallel workers, such as the DoFn instances of a ParDo transform.
  • Slow-running Pipelines or Lack of Output. If your pipeline runs slowly or runs for a long period of time without reporting results, you might check your quotas for streaming data sources and sinks such as Pub/Sub. There are also certain transforms that are better-suited to high-volume streaming pipelines than others.
  • Errors caused by transient failures in other Cloud Platform services. Your pipeline may fail because of a temporary outage or other problem in the Cloud Platform services upon which Dataflow depends, such as Compute Engine or Cloud Storage.

Detecting Graph or Pipeline Construction Errors

A graph construction error can occur when Dataflow is building the execution graph for your pipeline from the code in your Dataflow program. During graph construction time, Dataflow checks for illegal operations.

If Dataflow detects an error in graph construction, keep in mind that no job will be created on the Dataflow service, and thus you won't see any feedback in the Dataflow Monitoring Interface. Instead, you'll see an error message similar to the following in the console or terminal window where you ran your Dataflow program:

Java: SDK 1.x

For example, if your pipeline attempts to perform an aggregation like GroupByKey on a globally-windowed, non-triggered, unbounded PCollection, you'll see an error message similar to the following:

...
... Exception in thread "main" java.lang.IllegalStateException:
... GroupByKey cannot be applied to non-bounded PCollection in the GlobalWindow without a trigger.
... Use a Window.into or Window.triggering transform prior to GroupByKey
...

Java: SDK 2.x

For example, if your pipeline attempts to perform an aggregation like GroupByKey on a globally-windowed, non-triggered, unbounded PCollection, you'll see an error message similar to the following:

...
... Exception in thread "main" java.lang.IllegalStateException:
... GroupByKey cannot be applied to non-bounded PCollection in the GlobalWindow without a trigger.
... Use a Window.into or Window.triggering transform prior to GroupByKey
...

Python

For example, if your pipeline uses type hints and the argument type in one of the transforms is not as expected, you'll see an error message similar to the following:

... in <module> run()
... in run | beam.Map('count', lambda (word, ones): (word, sum(ones))))
... in __or__ return self.pipeline.apply(ptransform, self)
... in apply transform.type_check_inputs(pvalueish)
... in type_check_inputs self.type_check_inputs_or_outputs(pvalueish, 'input')
... in type_check_inputs_or_outputs pvalue_.element_type))
google.cloud.dataflow.typehints.decorators.TypeCheckError: Input type hint violation at group: expected Tuple[TypeVariable[K], TypeVariable[V]], got <type 'str'>

Should you encounter such an error, check your pipeline code to ensure that your pipeline's operations are legal.
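
For example, the Java GroupByKey error shown above can be corrected by windowing the unbounded PCollection before the aggregation. The following is a minimal Java SDK 2.x sketch; the Pub/Sub topic and the one-minute fixed window are illustrative assumptions:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class WindowBeforeGroupByKey {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    PCollection<String> messages =
        p.apply(PubsubIO.readStrings().fromTopic("projects/<project>/topics/<topic>"));

    // Without this Window.into (or a trigger), the grouping below would fail
    // at graph construction time because the PCollection is unbounded.
    PCollection<KV<String, Long>> counts =
        messages
            .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
            .apply(Count.<String>perElement());

    p.run();
  }
}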

Detecting Errors in Dataflow Job Validation

Once the Dataflow service has received your pipeline's graph, the service will attempt to validate your job. This includes making sure the service can access your job's associated Cloud Storage buckets for file staging and temporary output, checking for the required permissions in your Cloud Platform project, and making sure the service can access input and output sources (like files).

If your job fails the validation process, you'll see an error message in the Dataflow Monitoring Interface, as well as in your console or terminal window if you are using blocking execution. The error message will look similar to the following:

Java: SDK 1.x

INFO: To access the Dataflow monitoring console, please navigate to
  https://console.developers.google.com/project/google.com%3Aclouddfe/dataflow/job/2016-03-08_18_59_25-16868399470801620798
Submitted job: 2016-03-08_18_59_25-16868399470801620798
...
... Starting 3 workers...
... Executing operation BigQuery-Read+AnonymousParDo+BigQuery-Write
... Executing BigQuery import job "dataflow_job_16868399470801619475".
... Stopping worker pool...
... Workflow failed. Causes: ...BigQuery-Read+AnonymousParDo+BigQuery-Write failed.
Causes: ... BigQuery getting table "non_existent_table" from dataset "cws_demo" in project "x" failed.
Message: Not found: Table x:cws_demo.non_existent_table HTTP Code: 404
... Worker pool stopped.
... com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner run
INFO: Job finished with status FAILED
Exception in thread "main" com.google.cloud.dataflow.sdk.runners.DataflowJobExecutionException:
  Job 2016-03-08_18_59_25-16868399470801620798 failed with status FAILED
	at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:155)
	at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:56)
	at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:180)
	at com.google.cloud.dataflow.integration.BigQueryCopyTableExample.main(BigQueryCopyTableExample.java:74)

Java: SDK 2.x

INFO: To access the Dataflow monitoring console, please navigate to
  https://console.developers.google.com/project/google.com%3Aclouddfe/dataflow/job/2016-03-08_18_59_25-16868399470801620798
Submitted job: 2016-03-08_18_59_25-16868399470801620798
...
... Starting 3 workers...
... Executing operation BigQuery-Read+AnonymousParDo+BigQuery-Write
... Executing BigQuery import job "dataflow_job_16868399470801619475".
... Stopping worker pool...
... Workflow failed. Causes: ...BigQuery-Read+AnonymousParDo+BigQuery-Write failed.
Causes: ... BigQuery getting table "non_existent_table" from dataset "cws_demo" in project "my_project" failed.
Message: Not found: Table x:cws_demo.non_existent_table HTTP Code: 404
... Worker pool stopped.
... com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner run
INFO: Job finished with status FAILED
Exception in thread "main" com.google.cloud.dataflow.sdk.runners.DataflowJobExecutionException:
  Job 2016-03-08_18_59_25-16868399470801620798 failed with status FAILED
	at com.google.cloud.dataflow.sdk.runners.DataflowRunner.run(DataflowRunner.java:155)
	at com.google.cloud.dataflow.sdk.runners.DataflowRunner.run(DataflowRunner.java:56)
	at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:180)
	at com.google.cloud.dataflow.integration.BigQueryCopyTableExample.main(BigQueryCopyTableExample.java:74)

Python

INFO:root:Created job with id: [2016-03-08_14_12_01-2117248033993412477]
... Checking required Cloud APIs are enabled.
... Job 2016-03-08_14_12_01-2117248033993412477 is in state JOB_STATE_RUNNING.
... Combiner lifting skipped for step group: GroupByKey not followed by a combiner.
... Expanding GroupByKey operations into optimizable parts.
... Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
... Annotating graph with Autotuner information.
... Fusing adjacent ParDo, Read, Write, and Flatten operations
... Fusing consumer split into read
...
... Starting 1 workers...
...
... Executing operation read+split+pair_with_one+group/Reify+group/Write
... Executing failure step failure14
... Workflow failed.
Causes: ... read+split+pair_with_one+group/Reify+group/Write failed.
Causes: ... Unable to view metadata for files: gs://dataflow-samples/shakespeare/missing.txt.
... Cleaning up.
... Tearing down pending resources...
INFO:root:Job 2016-03-08_14_12_01-2117248033993412477 is in state JOB_STATE_FAILED.

Detecting an Exception in Worker Code

While your job is running, you may encounter errors or exceptions in your worker code. This generally means that the DoFns in your pipeline code have generated unhandled exceptions, which result in failed tasks in your Dataflow job.

Exceptions in user code (for example, your DoFn instances) are reported in the Dataflow Monitoring Interface. If you run your pipeline with blocking execution, you'll also see error messages printed in your console or terminal window, such as the following:

Java: SDK 1.x

INFO: To access the Dataflow monitoring console, please navigate to https://console.developers.google.com/project/google.com%3Aclouddfe/dataflow/job/2016-03-08_19_09_07-6448127003704955959
Submitted job: 2016-03-08_19_09_07-6448127003704955959
...
... Expanding GroupByKey operations into optimizable parts.
... Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
... Annotating graph with Autotuner information.
... Fusing adjacent ParDo, Read, Write, and Flatten operations
...
... Starting 1 workers...
... Executing operation TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/GroupByKey/Create
... Executing operation AnonymousParDo+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/Window.Into()+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/RandomKey+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/GroupByKey/Reify+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/GroupByKey/Write
... Workers have started successfully.
... java.lang.ArithmeticException: / by zero
	at com.google.cloud.dataflow.integration.WorkerCrashRecovery.crash(WorkerCrashRecovery.java:166)
	at com.google.cloud.dataflow.integration.WorkerCrashRecovery.access$200(WorkerCrashRecovery.java:51)
	at com.google.cloud.dataflow.integration.WorkerCrashRecovery$1.processElement(WorkerCrashRecovery.java:199)
... Executing operation TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/GroupByKey/Close
... Executing operation TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/GroupByKey/Read+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/GroupByKey/GroupByWindow+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.ReshardForWrite/Ungroup+TextIO.Write/DataflowPipelineRunner.BatchTextIOWrite/DataflowPipelineRunner.BatchTextIONativeWrite
... Stopping worker pool...
... Worker pool stopped.
... Cleaning up.

Note: The Dataflow service retries failed tasks up to 4 times in batch mode, and an unlimited number of times in streaming mode. In batch mode, your job will fail; in streaming, it may stall indefinitely.

Java: SDK 2.x

INFO: To access the Dataflow monitoring console, please navigate to https://console.developers.google.com/project/example_project/dataflow/job/2017-05-23_14_02_46-1117850763061203461
Submitted job: 2017-05-23_14_02_46-1117850763061203461
...
... To cancel the job using the 'gcloud' tool, run: gcloud beta dataflow jobs --project=example_project cancel 2017-05-23_14_02_46-1117850763061203461
... Autoscaling is enabled for job 2017-05-23_14_02_46-1117850763061203461.
... The number of workers will be between 1 and 15.
... Autoscaling was automatically enabled for job 2017-05-23_14_02_46-1117850763061203461.
...
... Executing operation BigQueryIO.Write/BatchLoads/Create/Read(CreateSource)+BigQueryIO.Write/BatchLoads/GetTempFilePrefix+BigQueryIO.Write/BatchLoads/TempFilePrefixView/BatchViewOverrides.GroupByWindowHashAsKeyAndWindowAsSortKey/ParDo(UseWindowHashAsKeyAndWindowAsSortKey)+BigQueryIO.Write/BatchLoads/TempFilePrefixView/Combine.GloballyAsSingletonView/Combine.globally(Singleton)/WithKeys/AddKeys/Map/ParMultiDo(Anonymous)+BigQueryIO.Write/BatchLoads/TempFilePrefixView/Combine.GloballyAsSingletonView/Combine.globally(Singleton)/Combine.perKey(Singleton)/GroupByKey/Reify+BigQueryIO.Write/BatchLoads/TempFilePrefixView/Combine.GloballyAsSingletonView/Combine.globally(Singleton)/Combine.perKey(Singleton)/GroupByKey/Write+BigQueryIO.Write/BatchLoads/TempFilePrefixView/BatchViewOverrides.GroupByWindowHashAsKeyAndWindowAsSortKey/BatchViewOverrides.GroupByKeyAndSortValuesOnly/Write
... Workers have started successfully.
...
... org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process SEVERE: 2017-05-23T21:06:33.711Z: (c14bab21d699a182): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.ArithmeticException: / by zero
        at com.google.cloud.dataflow.worker.runners.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:146)
        at com.google.cloud.dataflow.worker.runners.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:104)
        at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowAndCombineFn.closeWindow(BatchGroupAlsoByWindowAndCombineFn.java:191)
...
... Cleaning up.
... Stopping worker pool...
... Worker pool stopped.

Note: The Dataflow service retries failed tasks up to 4 times in batch mode, and an unlimited number of times in streaming mode. In batch mode, your job will fail; in streaming, it may stall indefinitely.

Python

INFO:root:Job 2016-03-08_14_21_32-8974754969325215880 is in state JOB_STATE_RUNNING.
...
INFO:root:... Expanding GroupByKey operations into optimizable parts.
INFO:root:... Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
INFO:root:... Annotating graph with Autotuner information.
INFO:root:... Fusing adjacent ParDo, Read, Write, and Flatten operations
...
INFO:root:...: Starting 1 workers...
INFO:root:...: Executing operation group/Create
INFO:root:...: Value "group/Session" materialized.
INFO:root:...: Executing operation read+split+pair_with_one+group/Reify+group/Write
INFO:root:Job 2016-03-08_14_21_32-8974754969325215880 is in state JOB_STATE_RUNNING.
INFO:root:...: ...: Workers have started successfully.
INFO:root:Job 2016-03-08_14_21_32-8974754969325215880 is in state JOB_STATE_RUNNING.
INFO:root:...: Traceback (most recent call last):
  File ".../dataflow_worker/batchworker.py", line 384, in do_work self.current_executor.execute(work_item.map_task)
  ...
  File ".../apache_beam/examples/wordcount.runfiles/google3/third_party/py/apache_beam/examples/wordcount.py", line 73, in <lambda>
ValueError: invalid literal for int() with base 10: 'www'

Note: The Dataflow service retries failed tasks up to 4 times.

Consider guarding against errors in your code by adding exception handlers. For example, if you'd like to drop elements that fail some custom input validation done in a ParDo, handle the exception within your DoFn and drop the element (see the sketch after the SDK-specific lists below). You can also track failing elements in a few different ways:

Java: SDK 1.x

  • You can log the failing elements and check the output using Cloud Logging.
  • You can have your ParDo write the failing elements to a side output for later inspection.

Java: SDK 2.x

  • You can use the Metrics class to track the properties of an executing pipeline, as shown in the following example:
    final Counter counter = Metrics.counter("stats", "even-items");
    PCollection<Integer> input = pipeline.apply(...);
    ...
    input.apply(ParDo.of(new DoFn<Integer, Integer>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        if (c.element() % 2 == 0) {
          counter.inc();
        }
      }
    }));
    
  • You can log the failing elements and check the output using Cloud Logging.
  • You can have your ParDo write the failing elements to a side output for later inspection.

Python

  • You can use the Metrics class to track the properties of an executing pipeline, as shown in the following example:
    class FilterTextFn(beam.DoFn):
      """A DoFn that filters for a specific key based on a regex."""

      def __init__(self, pattern):
        self.pattern = pattern
        # A custom metric can track values in your pipeline as it runs. Create
        # custom metrics to count unmatched words, and know the distribution of
        # word lengths in the input PCollection.
        self.word_len_dist = Metrics.distribution(self.__class__,
                                                  'word_len_dist')
        self.unmatched_words = Metrics.counter(self.__class__,
                                               'unmatched_words')

      def process(self, element):
        word = element
        self.word_len_dist.update(len(word))
        if re.match(self.pattern, word):
          yield element
        else:
          self.unmatched_words.inc()

    filtered_words = (
        words | 'FilterText' >> beam.ParDo(FilterTextFn('s.*')))
    
  • You can log the failing elements and check the output using Cloud Logging.
  • You can have your ParDo write the failing elements to a side output for later inspection.
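
The following sketch illustrates the exception-handling approach described above: a Java SDK 2.x DoFn that validates each element, drops the ones that fail, and counts them with a metric. The element type, parsing logic, and counter name are illustrative assumptions rather than part of the Dataflow SDK.

import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;

// Parses each line into an integer; lines that fail validation are dropped
// (and counted) instead of crashing the worker with an unhandled exception.
public class ParseAndDropInvalidFn extends DoFn<String, Integer> {
  private final Counter malformedLines =
      Metrics.counter(ParseAndDropInvalidFn.class, "malformed-lines");

  @ProcessElement
  public void processElement(ProcessContext c) {
    try {
      c.output(Integer.parseInt(c.element().trim()));
    } catch (NumberFormatException e) {
      // Drop the bad element; you could also log it or route it to a side output.
      malformedLines.inc();
    }
  }
}

You would apply this with ParDo.of(new ParseAndDropInvalidFn()) and watch the malformed-lines counter in the Dataflow Monitoring Interface.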

Troubleshooting Slow-Running Pipelines or Lack of Output

Java: SDK 1.x

If you have a high-volume streaming pipeline that is running slowly or stalled, there are a few things you can check:

Pub/Sub Quota

If your pipeline reads input from Google Cloud Pub/Sub, your Cloud Platform project may have insufficient Pub/Sub quota. One indication of this is if your job is generating a high number of 429 (Rate Limit Exceeded) errors. Try the following steps to check for such errors:

  1. Go to the Google Cloud Platform Console.
  2. In the left-hand navigation pane, click APIs & services.
  3. In the Search Box, search for Google Cloud Pub/Sub.
  4. Click the Usage tab.
  5. Check Response Codes and look for (4xx) client error codes.

Use .withFanout In Your Combine Transforms

If your pipeline processes high-volume unbounded PCollections, we recommend:

  • Use Combine.Globally.withFanout instead of Combine.Globally.
  • Use Combine.PerKey.withHotKeyFanout instead of Count.PerKey.

Java: SDK 2.x

If you have a high-volume streaming pipeline that is running slowly or stalled, there are a few things you can check:

Pub/Sub Quota

If your pipeline reads input from Google Cloud Pub/Sub, your Cloud Platform project may have insufficient Pub/Sub quota. One indication of this is if your job is generating a high number of 429 (Rate Limit Exceeded) errors. Try the following steps to check for such errors:

  1. Go to the Google Cloud Platform Console.
  2. In the left-hand navigation pane, click APIs & services.
  3. In the Search Box, search for Google Cloud Pub/Sub.
  4. Click the Usage tab.
  5. Check Response Codes and look for (4xx) client error codes.

Use .withFanout In Your Combine Transforms

If your pipeline processes high-volume unbounded PCollections, we recommend:

  • Use Combine.Globally.withFanout instead of Combine.Globally.
  • Use Combine.PerKey.withHotKeyFanout instead of Count.PerKey.
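
For example, with the Java SDK 2.x the fanout hints look roughly like the following sketch; the fanout values and the use of Sum as the combine function are arbitrary assumptions:

import org.apache.beam.sdk.transforms.Combine;
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class FanoutHints {
  // Pre-aggregates on 16 intermediate keys before the final global combine.
  static PCollection<Long> globalSum(PCollection<Long> values) {
    return values.apply(
        Combine.globally(Sum.ofLongs()).withFanout(16).withoutDefaults());
  }

  // Spreads hot keys across 10 intermediate combiners before combining per key.
  static PCollection<KV<String, Long>> perKeySum(PCollection<KV<String, Long>> pairs) {
    return pairs.apply(
        Combine.<String, Long, Long>perKey(Sum.ofLongs()).withHotKeyFanout(10));
  }
}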

Python

This feature is not yet supported in the Dataflow SDK for Python.

Common Errors and Courses of Action

This section describes some common errors you might encounter when running your Dataflow job, and suggests some courses of action for correcting or otherwise dealing with those errors.

RPC timed out Exceptions, DEADLINE_EXCEEDED Exceptions, or Server Unresponsive Errors

If you encounter RPC timeouts, DEADLINE_EXCEEDED exceptions, or Server Unresponsive errors while your job runs, these typically indicate one of two problems:

  • The Google Compute Engine network used for your job may be missing a firewall rule. The firewall rule needs to enable all TCP traffic among the VMs in the project. The network might be either the network named default or a network you specified in your pipeline options.
  • Your job is shuffle-bound. Consider one of, or a combination of, the following courses of action:

    Java: SDK 1.x

    • Add more workers. Try setting --numWorkers with a higher value when you run your pipeline.
    • Increase the size of the attached disk for workers. Try setting --diskSizeGb with a higher value when you run your pipeline.
    • Use an SSD-backed persistent disk. Try setting --workerDiskType="compute.googleapis.com/projects//zones//diskTypes/pd-ssd" when you run your pipeline.

    Java: SDK 2.x

    • Add more workers. Try setting --numWorkers with a higher value when you run your pipeline.
    • Increase the size of the attached disk for workers. Try setting --diskSizeGb with a higher value when you run your pipeline.
    • Use an SSD-backed persistent disk. Try setting --workerDiskType="compute.googleapis.com/projects//zones//diskTypes/pd-ssd" when you run your pipeline.

    Python

    • Add more workers. Try setting --num_workers with a higher value when you run your pipeline.
    • Increase the size of the attached disk for workers. Try setting --disk_size_gb with a higher value when you run your pipeline.
    • Use an SSD-backed persistent disk. Try setting --worker_disk_type="compute.googleapis.com/projects//zones//diskTypes/pd-ssd" when you run your pipeline.

Disk Space Errors such as RESOURCE_EXHAUSTED: IO error: No space left on disk

These errors usually indicate that you have allocated insufficient local disk space to process your job. If you are running your job with default settings, your job is running on 3 workers, each with 250 GB of local disk space, and with no auto-scaling. Consider modifying the default settings to increase the number of workers available to your job, to increase the default disk size per worker, or to enable auto-scaling.
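
For example, with the Java SDK 2.x these settings can be supplied as pipeline options, either on the command line or programmatically; the values in the sketch below are arbitrary assumptions:

import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class BiggerWorkerPool {
  public static void main(String[] args) {
    // Equivalent command-line flags: --numWorkers=10 --diskSizeGb=500
    // --autoscalingAlgorithm=THROUGHPUT_BASED --maxNumWorkers=20
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
    options.setNumWorkers(10);     // more workers than the default of 3
    options.setDiskSizeGb(500);    // more local disk per worker than the 250 GB default
    options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.THROUGHPUT_BASED);
    options.setMaxNumWorkers(20);  // upper bound when autoscaling is enabled
    // ... create and run your pipeline with these options ...
  }
}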

413 Request Entity Too Large / "The size of serialized JSON representation of the pipeline exceeds the allowable limit"

If you encounter an error about the JSON payload when submitting your job, it means your pipeline's JSON representation exceeds the maximum 10MB request size. These errors might appear as one of the following messages in your console or terminal window:

  • 413 Request Entity Too Large
  • "The size of serialized JSON representation of the pipeline exceeds the allowable limit"
  • "Failed to create a workflow job: Invalid JSON payload received"
  • "Failed to create a workflow job: Request payload exceeds the allowable limit"

The size of your job is specifically tied to the JSON representation of the pipeline; a larger pipeline means a larger request. Dataflow currently has a limitation that caps requests at 10MB.

To estimate the size of your pipeline's JSON request, run your pipeline with the following option:

Java: SDK 1.x

--dataflowJobFile=< path to output file >

Java: SDK 2.x

--dataflowJobFile=< path to output file >

Python

--dataflow_job_file=< path to output file >

This command writes a JSON representation of your job to a file. The size of the serialized file is a good estimate of the size of the request; the actual size will be slightly larger due to some additional information included in the request.

Certain conditions in your pipeline can cause the JSON representation to exceed the limit. Common conditions include:

  • A Create transform that includes a large amount of in-memory data.
  • A large DoFn instance that is serialized for transmission to remote workers.
  • A DoFn as an anonymous inner class instance that (possibly inadvertently) pulls in a large amount of data to be serialized.

Consider restructuring your pipeline to avoid these conditions.
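
For example, to avoid the first condition you can stage large datasets in Cloud Storage and read them at execution time rather than embedding them in a Create transform. The following Java SDK 2.x sketch assumes a placeholder bucket path:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class ReadStagedInput {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Instead of Create.of(hugeInMemoryList), which is serialized into the
    // job's JSON request, read the staged data at execution time:
    PCollection<String> lines =
        p.apply(TextIO.read().from("gs://<your-bucket>/large-input/*.txt"));

    // ... the rest of the pipeline ...
    p.run();
  }
}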

"Total number of BoundedSource objects generated by splitIntoBundles() operation is larger than the allowable limit" or "Total size of the BoundedSource objects generated by splitIntoBundles() operation is larger than the allowable limit".

Java: SDK 1.x

You might encounter this error if you're reading from a very large number of files via TextIO, AvroIO or some other file-based source. The particular limit depends on the details of your source (e.g. embedding schema in AvroIO.Read will allow fewer files), but it is on the order of tens of thousands of files in one pipeline.

You might also encounter this error if you've created a custom data source for your pipeline and your source's splitIntoBundles method returned a list of BoundedSource objects which takes up more than 20MB when serialized.

The allowable limit for the total size of the BoundedSource objects generated by your custom source's splitIntoBundles() operation is 20MB. You can work around this limitation by modifying your custom BoundedSource subclass so that the total size of the generated BoundedSource objects is smaller than the 20MB limit. For example, your source might generate fewer splits initially, and rely on Dynamic Work Rebalancing to further split inputs on demand.

Java: SDK 2.x

You might encounter this error if you're reading from a very large number of files via TextIO, AvroIO or some other file-based source. The particular limit depends on the details of your source (e.g. embedding schema in AvroIO.Read will allow fewer files), but it is on the order of tens of thousands of files in one pipeline.

You might also encounter this error if you've created a custom data source for your pipeline and your source's splitIntoBundles method returned a list of BoundedSource objects which takes up more than 20MB when serialized.

The allowable limit for the total size of the BoundedSource objects generated by your custom source's splitIntoBundles() operation is 20MB. You can work around this limitation by modifying your custom BoundedSource subclass so that the total size of the generated BoundedSource objects is smaller than the 20MB limit. For example, your source might generate fewer splits initially, and rely on Dynamic Work Rebalancing to further split inputs on demand.

Jobs that used to run now fail with "Staged package...is inaccessible"

Verify that the Cloud Storage bucket used for staging does not have TTL settings that cause staged packages to be deleted.

Encoding errors, IOExceptions, or unexpected behavior in user code.

The Dataflow SDKs and workers depend on common third-party components, which themselves import various dependencies. Version collisions can result in unexpected behavior in the service. If you are using any of these packages in your code, be aware that some libraries are not forward-compatible and you may need to pin to the listed versions that will be in scope during execution. To determine whether your JAR has a conflicting version in use, consider inspecting your project's dependency tree; you can generate it with tools such as Maven (for example, mvn dependency:tree).
Avoid specifying "latest" in your pom.xml for the libraries in the following table.

Java: SDK 1.x

SDK 1.9.1 dependencies

GroupId  ArtifactId  Version
com.fasterxml.jackson.core  jackson-annotations  2.7.0
com.fasterxml.jackson.core  jackson-core  2.7.0
com.fasterxml.jackson.core  jackson-databind  2.7.0
com.google.api-client  google-api-client-jackson2  1.22.0
com.google.api-client  google-api-client-java6  1.22.0
com.google.api-client  google-api-client  1.22.0
com.google.api.grpc  grpc-core-proto  0.0.3
com.google.api.grpc  grpc-pubsub-v1  0.0.2
com.google.apis  google-api-services-bigquery  v2-rev295-1.22.0
com.google.apis  google-api-services-clouddebugger  v2-rev8-1.22.0
com.google.apis  google-api-services-dataflow  v1b3-rev36-1.22.0
com.google.apis  google-api-services-datastore-protobuf  v1beta2-rev1-4.0.0
com.google.apis  google-api-services-pubsub  v1-rev10-1.22.0
com.google.apis  google-api-services-storage  v1-rev71-1.22.0
com.google.auth  google-auth-library-credentials  0.4.0
com.google.auth  google-auth-library-oauth2-http  0.4.0
com.google.auto.service  auto-service  1.0-rc2
com.google.auto  auto-common  0.3
com.google.cloud.bigdataoss  gcsio  1.4.5
com.google.cloud.bigdataoss  util  1.4.5
com.google.cloud.bigtable  bigtable-protos  0.3.0
com.google.cloud.datastore  datastore-v1-proto-client  1.1.0
com.google.cloud.datastore  datastore-v1-protos  1.0.1
com.google.code.findbugs  jsr305  3.0.1
com.google.http-client  google-http-client-jackson  1.22.0
com.google.http-client  google-http-client-jackson2  1.22.0
com.google.http-client  google-http-client-protobuf  1.22.0
com.google.oauth-client  google-oauth-client-java6  1.22.0
com.google.oauth-client  google-oauth-client  1.22.0
com.google.protobuf.nano  protobuf-javanano  3.0.0-alpha-5
com.google.protobuf  protobuf-java  3.0.0-beta-1
com.squareup.okhttp  okhttp  2.5.0
com.squareup.okio  okio  1.6.0
com.thoughtworks.paranamer  paranamer  2.3
commons-codec  commons-codec  1.3
commons-logging  commons-logging  1.1.1
io.grpc  grpc-all  0.13.1
io.grpc  grpc-auth  0.13.1
io.grpc  grpc-core  0.13.1
io.grpc  grpc-netty  0.13.1
io.grpc  grpc-okhttp  0.13.1
io.grpc  grpc-protobuf-nano  0.13.1
io.grpc  grpc-protobuf  0.13.1
io.grpc  grpc-stub  0.13.1
io.netty  netty-buffer  4.1.0.CR1
io.netty  netty-codec-http  4.1.0.CR1
io.netty  netty-codec-http2  4.1.0.CR1
io.netty  netty-codec  4.1.0.CR1
io.netty  netty-common  4.1.0.CR1
io.netty  netty-handler  4.1.0.CR1
io.netty  netty-resolver  4.1.0.CR1
io.netty  netty-transport  4.1.0.CR1
joda-time  joda-time  2.4
org.apache.avro  avro  1.7.7
org.apache.commons  commons-compress  1.9
org.apache.httpcomponents  httpclient  4.0.1
org.apache.httpcomponents  httpcore  4.0.1
org.codehaus.jackson  jackson-core-asl  1.9.13
org.codehaus.jackson  jackson-mapper-asl  1.9.13
org.codehaus.woodstox  stax2-api  3.1.4
org.codehaus.woodstox  woodstox-core-asl  4.4.1
org.slf4j  slf4j-api  1.7.14
org.tukaani  xz  1.5
org.xerial.snappy  snappy-java  1.1.2.1

Additionally, the Dataflow Worker starts with the following logging JARs prepended to the classpath:

GroupId  ArtifactId  Version
org.apache.logging.log4j  log4j-to-slf4j  2.5
org.slf4j  jcl-over-slf4j  1.7.14
org.slf4j  log4j-over-slf4j  1.7.14
org.slf4j  slf4j-api  1.7.14
org.slf4j  slf4j-jdk14  1.7.14

Java: SDK 2.x

SDK 2.1.0 dependencies

GroupId  ArtifactId  Version
com.fasterxml.jackson.core  jackson-annotations  2.8.8
com.fasterxml.jackson.core  jackson-core  2.8.8
com.fasterxml.jackson.core  jackson-databind  2.8.8
com.google.api-client  google-api-client  1.22.0
com.google.api-client  google-api-client-jackson2  1.20.0
com.google.api-client  google-api-client-java6  1.20.0
com.google.api.grpc  grpc-google-cloud-spanner-admin-database-v1  0.1.11
com.google.api.grpc  grpc-google-cloud-spanner-admin-instance-v1  0.1.11
com.google.api.grpc  grpc-google-cloud-spanner-v1  0.1.11
com.google.api.grpc  grpc-google-common-protos  0.1.0
com.google.api.grpc  grpc-google-iam-v1  0.1.0
com.google.api.grpc  grpc-google-longrunning-v1  0.1.11
com.google.api.grpc  grpc-google-pubsub-v1  0.1.0
com.google.api.grpc  proto-google-cloud-spanner-admin-database-v1  0.1.9
com.google.api.grpc  proto-google-cloud-spanner-admin-instance-v1  0.1.11
com.google.api.grpc  proto-google-cloud-spanner-v1  0.1.11
com.google.api.grpc  proto-google-common-protos  0.1.9
com.google.api.grpc  proto-google-iam-v1  0.1.11
com.google.api.grpc  proto-google-longrunning-v1  0.1.11
com.google.apis  google-api-services-bigquery  v2-rev295-1.22.0
com.google.apis  google-api-services-clouddebugger  v2-rev8-1.22.0
com.google.apis  google-api-services-cloudresourcemanager  v1-rev6-1.22.0
com.google.apis  google-api-services-dataflow  v1b3-rev196-1.22.0
com.google.apis  google-api-services-pubsub  v1-rev10-1.22.0
com.google.apis  google-api-services-storage  v1-rev71-1.22.0
com.google.api  api-common  1.1.0
com.google.api  gax  1.3.1
com.google.api  gax-grpc  0.20.0
com.google.auth  google-auth-library-appengine  0.6.1
com.google.auth  google-auth-library-credentials  0.7.1
com.google.auth  google-auth-library-oauth2-http  0.7.1
com.google.auto.service  auto-service  1.0-rc2
com.google.auto  auto-common  0.3
com.google.cloud.bigdataoss  gcsio  1.4.5
com.google.cloud.bigdataoss  util  1.4.5
com.google.cloud.bigtable  bigtable-client-core  0.9.7.1
com.google.cloud.bigtable  bigtable-protos  0.9.7.1
com.google.cloud.datastore  datastore-v1-proto-client  1.4.0
com.google.cloud.datastore  datastore-v1-protos  1.3.0
com.google.cloud  google-cloud-core  1.0.2
com.google.cloud  google-cloud-core-grpc  1.2.0
com.google.cloud  google-cloud-spanner  0.20.0-beta
com.google.code.findbugs  jsr305  3.0.1
com.google.code.gson  gson  2.7
com.google.guava  guava  20.0
com.google.http-client  google-http-client  1.22.0
com.google.http-client  google-http-client-jackson  1.22.0
com.google.http-client  google-http-client-jackson2  1.22.0
com.google.http-client  google-http-client-protobuf  1.20.0
com.google.instrumentation  instrumentation-api  0.3.0
com.google.oauth-client  google-oauth-client  1.22.0
com.google.oauth-client  google-oauth-client-java6  1.20.0
com.google.protobuf.nano  protobuf-javanano  3.0.0-alpha-5
com.google.protobuf  protobuf-java  3.2.0
com.google.protobuf  protobuf-java-util  3.2.0
commons-codec  commons-codec  1.3
commons-logging  commons-logging  1.2
com.squareup.okhttp  okhttp  2.5.0
com.squareup.okio  okio  1.6.0
com.thoughtworks.paranamer  paranamer  2.7
io.dropwizard.metrics  metrics-core  3.1.2
io.grpc  grpc-all  1.2.0
io.grpc  grpc-auth  1.2.0
io.grpc  grpc-context  1.2.0
io.grpc  grpc-core  1.2.0
io.grpc  grpc-netty  1.2.0
io.grpc  grpc-okhttp  1.2.0
io.grpc  grpc-protobuf  1.2.0
io.grpc  grpc-protobuf-lite  1.2.0
io.grpc  grpc-protobuf-nano  1.2.0
io.grpc  grpc-stub  1.2.0
io.netty  netty-buffer  4.1.8.Final
io.netty  netty-codec  4.1.8.Final
io.netty  netty-codec-http2  4.1.8.Final
io.netty  netty-codec-http  4.1.8.Final
io.netty  netty-codec-socks  4.1.8.Final
io.netty  netty-common  4.1.8.Final
io.netty  netty-handler  4.1.8.Final
io.netty  netty-handler-proxy  4.1.8.Final
io.netty  netty-resolver  4.1.8.Final
io.netty  netty-tcnative-boringssl-static  1.1.33.Fork18
io.netty  netty-transport  4.1.8.Final
io.netty  netty-transport-native-epoll  linux-x86_64:4.1.8.Final
javax.servlet  javax.servlet-api  3.1.0
joda-time  joda-time  2.4
org.apache.avro  avro  1.8.1
org.apache.beam  beam-runners-core-construction-java  2.1.0
org.apache.beam  beam-runners-core-java  2.1.0
org.apache.beam  beam-runners-google-cloud-dataflow-java  2.1.0
org.apache.beam  beam-sdks-common-fn-api  2.1.0
org.apache.beam  beam-sdks-common-runner-api  2.1.0
org.apache.beam  beam-sdks-java-core  2.1.0
org.apache.beam  beam-sdks-java-extensions-google-cloud-platform-core  2.1.0
org.apache.beam  beam-sdks-java-extensions-protobuf  2.1.0
org.apache.beam  beam-sdks-java-harness  2.1.0
org.apache.beam  beam-sdks-java-io-google-cloud-platform  2.1.0
org.apache.commons  commons-compress  1.8.1
org.apache.httpcomponents  httpclient  4.0.1
org.apache.httpcomponents  httpcore  4.0.1
org.codehaus.jackson  jackson-core-asl  1.9.13
org.codehaus.jackson  jackson-mapper-asl  1.9.13
org.eclipse.jetty  jetty-http  9.2.10.v20150310
org.eclipse.jetty  jetty-io  9.2.10.v20150310
org.eclipse.jetty  jetty-security  9.2.10.v20150310
org.eclipse.jetty  jetty-server  9.2.10.v20150310
org.eclipse.jetty  jetty-servlet  9.2.10.v20150310
org.eclipse.jetty  jetty-util  9.2.10.v20150310
org.json  json  20160810
org.slf4j  slf4j-api  1.7.14
org.slf4j  slf4j-jdk14  1.7.14
org.threeten  threetenbp  1.3.3
org.tukaani  xz  1.5
org.xerial.snappy  snappy-java  1.1.4-M3

Additionally, the Dataflow Worker starts with the following logging JARs prepended to the classpath:

GroupId  ArtifactId  Version
org.apache.logging.log4j  log4j-to-slf4j  2.5
org.slf4j  jcl-over-slf4j  1.7.14
org.slf4j  log4j-over-slf4j  1.7.14
org.slf4j  slf4j-api  1.7.14
org.slf4j  slf4j-jdk14  1.7.14
