Dataflow is based on the open-source Apache Beam project. This document describes the Apache Beam programming model.
Apache Beam is an open source, unified model for defining both batch and streaming pipelines. The Apache Beam programming model simplifies the mechanics of large-scale data processing. Using one of the Apache Beam SDKs, you build a program that defines the pipeline. Then, you execute the pipeline on a specific platform such as Dataflow. This model lets you concentrate on the logical composition of your data processing job, rather than managing the orchestration of parallel processing.
Apache Beam insulates you from the low-level details of distributed processing, such as coordinating individual workers, sharding datasets, and other such tasks. Dataflow fully manages these low-level details.
A pipeline is a graph of transformations that are applied to collections of data. In Apache Beam, a collection is called a PCollection, and a transform is called a PTransform. A PCollection can be bounded or unbounded. A bounded PCollection has a known, fixed size, and can be processed using a batch pipeline. Unbounded PCollections must use a streaming pipeline, because the data is processed as it arrives.
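For example, a minimal batch pipeline in the Apache Beam Python SDK might build a bounded PCollection from an in-memory list and apply a couple of PTransforms to it. This is a sketch; the element values and step labels are illustrative only:

```python
import apache_beam as beam

# A bounded PCollection created from an in-memory list, processed as a batch job.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateWords" >> beam.Create(["alpha", "beta", "gamma"])  # bounded PCollection
        | "Uppercase" >> beam.Map(str.upper)                        # PTransform: 1-to-1 mapping
        | "PrintElements" >> beam.Map(print)                        # inspect results locally
    )
```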
Apache Beam provides connectors to read from and write to different systems, including Google Cloud services and third-party technologies such as Apache Kafka.
The following diagram shows an Apache Beam pipeline.
You can write PTransforms that perform arbitrary logic. The Apache Beam SDKs also provide a library of useful PTransforms out of the box, including transforms that do the following (a sketch using several of them appears after this list):
- Filter out all elements that don't satisfy a predicate.
- Apply a 1-to-1 mapping function over each element.
- Group elements by key.
- Count the elements in a collection.
- Count the elements associated with each key in a key-value collection.
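The following sketch shows how several of these built-in transforms might be chained in the Beam Python SDK; the sample elements and step labels are illustrative only:

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    items = pipeline | beam.Create([("fruit", "apple"), ("fruit", "pear"), ("veg", "kale")])

    # Filter: keep only elements that satisfy a predicate.
    fruit = items | beam.Filter(lambda kv: kv[0] == "fruit")

    # Map: apply a 1-to-1 mapping function over each element.
    upper = fruit | beam.Map(lambda kv: (kv[0], kv[1].upper()))

    # GroupByKey: group elements by key.
    grouped = upper | beam.GroupByKey()

    # Count: count the elements associated with each key.
    counts = upper | "CountPerKey" >> beam.combiners.Count.PerKey()
```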
To run an Apache Beam pipeline using Dataflow, perform the following steps:
- Use the Apache Beam SDK to define and build the pipeline. Alternatively, you can deploy a prebuilt pipeline by using a Dataflow template.
- Use Dataflow to run the pipeline. Dataflow allocates a pool of VMs to run the job, deploys the code to the VMs, and orchestrates running the job, as sketched in the example after these steps.
- Dataflow performs optimizations on the backend to make your pipeline run efficiently and take advantage of parallelization.
- While a job is running and after it completes, use Dataflow management capabilities to monitor progress and troubleshoot.
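A minimal sketch of the first two steps in the Python SDK might look like the following. The project ID, region, and Cloud Storage paths are placeholders that you would replace with your own values:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder values: replace with your own project, region, and bucket.
options = PipelineOptions(
    runner="DataflowRunner",       # hand the pipeline to Dataflow for execution
    project="my-project-id",
    region="us-central1",
    temp_location="gs://my-bucket/temp",
)

# Define the pipeline with the SDK; Dataflow provisions workers and runs it.
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | beam.io.ReadFromText("gs://my-bucket/input/*.txt")
        | beam.Map(str.strip)
        | beam.io.WriteToText("gs://my-bucket/output/result")
    )
```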
Apache Beam Concepts
This section contains summaries of fundamental concepts.
- Pipeline
- A pipeline encapsulates the entire series of computations that are involved in reading input data, transforming that data, and writing output data. The input source and output sink can be the same type or of different types, letting you convert data from one format to another. Apache Beam programs start by constructing a Pipeline object, and then using that object as the basis for creating the pipeline's datasets. Each pipeline represents a single, repeatable job.
- PCollection
- A PCollection represents a potentially distributed, multi-element dataset that acts as the pipeline's data. Apache Beam transforms use PCollection objects as inputs and outputs for each step in your pipeline. A PCollection can hold a dataset of a fixed size or an unbounded dataset from a continuously updating data source.
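As a sketch, the same pipeline code can produce a bounded or an unbounded PCollection depending on where it reads from; the bucket path and Pub/Sub topic below are placeholders:

```python
import apache_beam as beam

pipeline = beam.Pipeline()

# Bounded PCollection: a fixed-size dataset, suitable for a batch pipeline.
batch_lines = pipeline | "ReadFiles" >> beam.io.ReadFromText("gs://my-bucket/data/*.csv")

# Unbounded PCollection: a continuously updating source, which requires a
# streaming pipeline (the streaming pipeline option must also be enabled).
stream_lines = pipeline | "ReadTopic" >> beam.io.ReadFromPubSub(
    topic="projects/my-project-id/topics/my-topic"
)
```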
- Transform
- A transform represents a processing operation that transforms data. A transform takes one or more PCollections as input, performs an operation that you specify on each element in that collection, and produces one or more PCollections as output. A transform can perform nearly any kind of processing operation, including performing mathematical computations on data, converting data from one format to another, grouping data together, reading and writing data, filtering data to output only the elements you want, or combining data elements into single values.
- ParDo
- ParDo is the core parallel processing operation in the Apache Beam SDKs, invoking a user-specified function on each of the elements of the input PCollection. ParDo collects the zero or more output elements into an output PCollection. The ParDo transform processes elements independently and possibly in parallel.
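A sketch of a ParDo in the Python SDK: the DoFn below is a hypothetical user-specified function that emits zero or more outputs per input element:

```python
import apache_beam as beam

class ExtractWordsFn(beam.DoFn):
    """User-specified function invoked by ParDo on each input element."""
    def process(self, element):
        # Emit zero or more output elements per input element.
        for word in element.split():
            yield word

with beam.Pipeline() as pipeline:
    words = (
        pipeline
        | beam.Create(["the quick brown fox", ""])
        | beam.ParDo(ExtractWordsFn())  # elements processed independently, possibly in parallel
    )
```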
- Pipeline I/O
- Apache Beam I/O connectors let you read data into your pipeline and write output data from your pipeline. An I/O connector consists of a source and a sink. All Apache Beam sources and sinks are transforms that let your pipeline work with data from several different data storage formats. You can also write a custom I/O connector.
- Aggregation
- Aggregation is the process of computing some value from multiple input elements. The primary computational pattern for aggregation in Apache Beam is to group all elements with a common key and window, and then combine each group of elements using an associative and commutative operation.
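For example, a per-key sum is one such associative and commutative combine; this sketch uses illustrative key-value elements:

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    totals = (
        pipeline
        | beam.Create([("store-a", 3), ("store-a", 5), ("store-b", 2)])
        # Group elements that share a key, then combine each group with an
        # associative, commutative operation (here, summation).
        | beam.CombinePerKey(sum)
    )
```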
- User-defined functions (UDFs)
- Some operations within Apache Beam allow executing user-defined code as a way of configuring the transform. For ParDo, user-defined code specifies the operation to apply to every element, and for Combine, it specifies how values should be combined. A pipeline might contain UDFs written in a different language than the language of your runner. A pipeline might also contain UDFs written in multiple languages.
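As a sketch of user-defined code configuring a Combine transform, the following hypothetical CombineFn computes a per-key mean:

```python
import apache_beam as beam

class MeanFn(beam.CombineFn):
    """User-defined code that tells Combine how values should be combined."""
    def create_accumulator(self):
        return (0.0, 0)  # (running sum, count)

    def add_input(self, accumulator, value):
        total, count = accumulator
        return total + value, count + 1

    def merge_accumulators(self, accumulators):
        totals, counts = zip(*accumulators)
        return sum(totals), sum(counts)

    def extract_output(self, accumulator):
        total, count = accumulator
        return total / count if count else float("nan")

with beam.Pipeline() as pipeline:
    means = (
        pipeline
        | beam.Create([("sensor-1", 10.0), ("sensor-1", 20.0), ("sensor-2", 5.0)])
        | beam.CombinePerKey(MeanFn())
    )
```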
- Runner
- Runners are the software that accepts a pipeline and executes it. Most runners are translators or adapters to massively parallel big-data processing systems. Other runners exist for local testing and debugging.
- Source
- A transform that reads from an external storage system. A pipeline typically reads input data from a source. The source has a type, which may be different from the sink type, so you can change the format of data as it moves through the pipeline.
- Sink
- A transform that writes to an external data storage system, like a file or a database.
- TextIO
- A PTransform for reading and writing text files. The TextIO source and sink support files compressed with gzip and bzip2. The TextIO input source supports JSON. However, for the Dataflow service to be able to parallelize input and output, your source data must be delimited with a line feed. You can use a regular expression to target specific files with the TextIO source. Dataflow supports general wildcard patterns. Your glob expression can appear anywhere in the file path. However, Dataflow does not support recursive wildcards (**).
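A sketch of using glob patterns with the text connector in the Python SDK; the paths are placeholders, and compressed files are typically detected by their file extension:

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    lines = (
        pipeline
        # Wildcards can appear anywhere in the path, but not as a recursive ** glob.
        | beam.io.ReadFromText("gs://my-bucket/logs/2024-*/events-*.json.gz")
        # Each newline-delimited record is one element; parse it downstream as needed.
        | beam.Map(lambda line: line.strip())
    )
    lines | beam.io.WriteToText("gs://my-bucket/output/parsed", file_name_suffix=".txt")
```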
- Event time
- The time a data event occurs, determined by the timestamp on the data element itself. This contrasts with the time the actual data element gets processed at any stage in the pipeline.
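A sketch of attaching event-time timestamps to elements in the Python SDK; the dictionary field name is a hypothetical example:

```python
import apache_beam as beam
from apache_beam.transforms import window

with beam.Pipeline() as pipeline:
    stamped = (
        pipeline
        | beam.Create([{"user": "a", "event_ts": 1700000000.0}])
        # Use the timestamp carried on the element itself as its event time,
        # rather than the time the element happens to be processed.
        | beam.Map(lambda e: window.TimestampedValue(e, e["event_ts"]))
    )
```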
- Windowing
- Windowing enables grouping operations over unbounded collections by dividing the collection into windows of finite collections according to the timestamps of the individual elements. A windowing function tells the runner how to assign elements to an initial window, and how to merge windows of grouped elements. Apache Beam lets you define different kinds of windows or use the predefined windowing functions.
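For example, the following sketch assigns elements to fixed, non-overlapping 60-second windows based on their event-time timestamps; the window length is arbitrary:

```python
import apache_beam as beam
from apache_beam.transforms import window

with beam.Pipeline() as pipeline:
    windowed_counts = (
        pipeline
        | beam.Create([("page-a", 1), ("page-b", 1)])
        # Divide the collection into fixed 60-second windows.
        | beam.WindowInto(window.FixedWindows(60))
        # Grouping operations now work per key and per window.
        | beam.CombinePerKey(sum)
    )
```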
- Watermark
- Apache Beam tracks a watermark, which is the system's notion of when all data in a certain window can be expected to have arrived in the pipeline. Apache Beam tracks a watermark because data is not guaranteed to arrive in a pipeline in time order or at predictable intervals. In addition, there are no guarantees that data events will appear in the pipeline in the same order that they were generated.
- Trigger
- Triggers determine when to emit aggregated results as data arrives. For bounded data, results are emitted after all of the input has been processed. For unbounded data, results are emitted when the watermark passes the end of the window, indicating that the system believes all input data for that window has been processed. Apache Beam provides several predefined triggers and lets you combine them.
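A sketch combining a windowing function with a trigger in the Python SDK; the early-firing interval and accumulation mode shown are illustrative choices:

```python
import apache_beam as beam
from apache_beam.transforms import trigger, window

with beam.Pipeline() as pipeline:
    results = (
        pipeline
        | beam.Create([("page-a", 1)])
        | beam.WindowInto(
            window.FixedWindows(60),
            # Emit a speculative result every 30 seconds of processing time,
            # then a final result when the watermark passes the end of the window.
            trigger=trigger.AfterWatermark(early=trigger.AfterProcessingTime(30)),
            accumulation_mode=trigger.AccumulationMode.ACCUMULATING,
        )
        | beam.CombinePerKey(sum)
    )
```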
- To learn more about the basic concepts of building pipelines using the Apache Beam SDKs, see the Apache Beam Programming Guide in the Apache Beam documentation.
- For more details about the Apache Beam capabilities supported by Dataflow, see the Apache Beam capability matrix.