Read from Cloud Storage to Dataflow

To read data from Cloud Storage to Dataflow, use the Apache Beam TextIO or AvroIO I/O connector.

Include the Google Cloud library dependency

To use the TextIO or AvroIO connector with Cloud Storage, include the following dependency. This library provides a scheme handler for "gs://" filenames.

Java

<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
  <version>${beam.version}</version>
</dependency>

Python

apache-beam[gcp]==VERSION

Go

import _ "github.com/apache/beam/sdks/v2/go/pkg/beam/io/filesystem/gcs"

For more information, see Install the Apache Beam SDK.

Enable gRPC on the Apache Beam I/O connector on Dataflow

You can connect to Cloud Storage using gRPC through the Apache Beam I/O connector on Dataflow. gRPC is a high-performance, open-source remote procedure call (RPC) framework developed by Google that you can use to interact with Cloud Storage.

To speed up your Dataflow job's read requests to Cloud Storage, you can enable the Apache Beam I/O connector on Dataflow to use gRPC.

Command line

  1. Ensure that you use the Apache Beam SDK version 2.55.0 or later.
  2. To run a Dataflow job, use the --additional-experiments=use_grpc_for_gcs pipeline option. For information about the different pipeline options, see Optional flags.

Apache Beam SDK

  1. Ensure that you use the Apache Beam SDK version 2.55.0 or later.
  2. To run a Dataflow job, use the --experiments=use_grpc_for_gcs pipeline option. For information about the different pipeline options, see Basic options.

You can configure the Apache Beam I/O connector on Dataflow to generate gRPC-related metrics in Cloud Monitoring. The gRPC-related metrics can help you do the following:

  • Monitor and optimize the performance of gRPC requests to Cloud Storage.
  • Troubleshoot and debug issues.
  • Gain insights into your application's usage and behavior.

For information about how to configure the Apache Beam I/O connector on Dataflow to generate gRPC-related metrics, see Use client-side metrics. If gathering metrics isn't necessary for your use case, you can choose to opt out of metrics collection. For instructions, see Opt out of client-side metrics.

Parallelism

The TextIO and AvroIO connectors support two levels of parallelism:

  • Individual files are keyed separately, so that multiple workers can read them.
  • If the files are uncompressed, the connector can read sub-ranges of each file separately, leading to a very high level of parallelism. This splitting is only possible if each line in the file is a meaningful record. For example, it's not available by default for JSON files. The sketch after this list illustrates the difference.

Performance

The following table shows performance metrics for reading from Cloud Storage. The workloads were run on one e2-standard-2 worker, using the Apache Beam SDK 2.49.0 for Java. They did not use Runner v2.

100 M records | 1 kB | 1 column   Throughput (bytes)   Throughput (elements)
Read                              320 MBps             320,000 elements per second

These metrics are based on simple batch pipelines. They are intended to compare performance between I/O connectors, and are not necessarily representative of real-world pipelines. Dataflow pipeline performance is complex, and is a function of VM type, the data being processed, the performance of external sources and sinks, and user code. Metrics are based on running the Java SDK, and aren't representative of the performance characteristics of other language SDKs. For more information, see Beam IO Performance.

Best practices

  • Avoid using watchForNewFiles with Cloud Storage. This approach scales poorly for large production pipelines, because the connector must keep a list of seen files in memory. The list can't be flushed from memory, which reduces the working memory of workers over time. Consider using Pub/Sub notifications for Cloud Storage instead; the first sketch after this list outlines that approach. For more information, see File processing patterns.

  • If both the filename and the file contents are useful data, use the FileIO class to read filenames. For example, a filename might contain metadata that is useful when processing the data in the file. For more information, see Accessing filenames. The FileIO documentation also shows an example of this pattern; the second sketch after this list gives a minimal version.

Example

The following example shows how to read from Cloud Storage.

Java

To authenticate to Dataflow, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class ReadFromStorage {
  // Custom options; supply the bucket name at launch time with --bucket=BUCKET_NAME.
  public interface Options extends PipelineOptions {
    @Description("The Cloud Storage bucket to read from")
    String getBucket();
    void setBucket(String value);
  }

  public static Pipeline createPipeline(Options options) {
    var pipeline = Pipeline.create(options);
    pipeline
        // Read from a text file.
        .apply(TextIO.read().from(
            "gs://" + options.getBucket() + "/*.txt"))
        // Print each line and pass it through unchanged.
        .apply(
            MapElements.into(TypeDescriptors.strings())
                .via(
                    (x -> {
                      System.out.println(x);
                      return x;
                    })));
    return pipeline;
  }

  public static void main(String[] args) {
    // Parse the options from the command-line arguments.
    Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);
    PipelineResult result = createPipeline(options).run();
    result.waitUntilFinish();
  }
}

What's next