Cloud Dataflow

Simplified stream and batch data processing, with equal reliability and expressiveness


Faster development, easier management

Cloud Dataflow is a fully managed service for transforming and enriching data in stream (real-time) and batch (historical) modes with equal reliability and expressiveness; no more complex workarounds or compromises needed. With its serverless approach to resource provisioning and management, you have access to virtually limitless capacity to solve your biggest data processing challenges, while paying only for what you use.

Cloud Dataflow unlocks transformational use cases across industries, including:

  • Clickstream, point-of-sale, and segmentation analysis in retail
  • Fraud detection in financial services
  • Personalized user experience in gaming
  • IoT analytics in manufacturing, healthcare, and logistics

Accelerate development for batch & streaming

Cloud Dataflow supports fast, simplified pipeline development via expressive Java and Python APIs in the Apache Beam SDK, which provides a rich set of windowing and session analysis primitives as well as an ecosystem of source and sink connectors. Plus, Beam’s unique, unified development model lets you reuse more code across streaming and batch pipelines.


Simplify operations & management

GCP’s serverless approach removes operational overhead, with performance, scaling, availability, security, and compliance handled automatically, so users can focus on programming instead of managing server clusters. Integration with Stackdriver, GCP’s unified logging and monitoring solution, lets you monitor and troubleshoot your pipelines as they are running. Rich visualization, logging, and advanced alerting help you identify and respond to potential issues.


Build on a foundation for machine learning

Use Cloud Dataflow as a convenient integration point to bring predictive analytics to fraud detection, real-time personalization and similar use cases by adding TensorFlow-based Cloud Machine Learning models and APIs to your data processing pipelines.


Use your favorite and familiar tools

Cloud Dataflow seamlessly integrates with GCP services for streaming events ingestion (Cloud Pub/Sub), data warehousing (BigQuery), machine learning (Cloud Machine Learning), and more. Its Beam-based SDK also lets developers build custom extensions and even choose alternative execution engines, such as Apache Spark via Cloud Dataproc or on-premises. For Apache Kafka users, a Cloud Dataflow connector makes integration with GCP easy.


Data Transformation with Cloud Dataflow



Automated Resource Management
Cloud Dataflow automates provisioning and management of processing resources to minimize latency and maximize utilization; no more spinning up instances by hand or reserving them.
Dynamic Work Rebalancing
Automated and optimized work partitioning dynamically rebalances lagging work. No need to chase down “hot keys” or pre-process your input data.
Reliable & Consistent Exactly-once Processing
Provides built-in support for fault-tolerant execution that is consistent and correct regardless of data size, cluster size, processing pattern or pipeline complexity.
Horizontal Auto-scaling
Horizontal auto-scaling of worker resources for optimum throughput results in better overall price-to-performance.
Unified Programming Model
Apache Beam SDK offers equally rich MapReduce-like operations, powerful data windowing, and fine-grained correctness control for streaming and batch data alike.
Community-driven Innovation
Developers wishing to extend the Cloud Dataflow programming model can fork and/or contribute to Apache Beam.

Partnerships & Integrations

Google Cloud Platform partners and third-party developers have built integrations with Dataflow to enable powerful data processing tasks of any size, quickly and easily.




Salesforce




“Running our pipelines on Cloud Dataflow lets us focus on programming without having to worry about deploying and maintaining instances running our code (a hallmark of GCP overall).”

- Jibran Saithi, Lead Architect, Qubit

User-friendly Pricing

Cloud Dataflow jobs are billed per minute, based on the actual use of Cloud Dataflow batch or streaming workers. Jobs that consume additional GCP resources, such as Cloud Storage or Cloud Pub/Sub, are billed according to that service’s pricing.
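As a sketch of how per-minute billing composes, the function below prorates hourly unit rates to the minute and sums them across a worker's resources. The rates vary by region and are not shown in this document, so the two rates used here are made-up placeholders, not Google's actual prices:

```python
def dataflow_worker_cost(minutes, vcpu, pd_gb, vcpu_rate_hr, pd_rate_gb_hr):
    """Prorate hourly unit rates to the minute and sum across resources."""
    hourly = vcpu * vcpu_rate_hr + pd_gb * pd_rate_gb_hr
    return hourly / 60.0 * minutes

# Batch worker defaults (from the footnotes): 1 vCPU, 250 GB Persistent Disk.
# The two rates below are hypothetical, for illustration only.
cost = dataflow_worker_cost(minutes=10, vcpu=1, pd_gb=250,
                            vcpu_rate_hr=0.06, pd_rate_gb_hr=0.00005)
print(f"${cost:.4f}")  # → $0.0121 at these made-up rates
```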

Cloud Dataflow Worker Type | vCPU ($ per vCPU hr) | Storage - Standard Persistent Disk ($ per GB hr) | Storage - SSD Persistent Disk ($ per GB hr) | Cloud Dataflow Shuffle 3 ($ per GB hr)
Batch 1 | $ | $ | $ | $
Streaming 2 | $ | $ | $ | $
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

1 Batch worker defaults: 1 vCPU, 3.75 GB memory, 250 GB Persistent Disk

2 Streaming worker defaults: 4 vCPU, 15 GB memory, 420 GB Persistent Disk

3 Service-based Cloud Dataflow Shuffle is currently available in beta for batch pipelines in the us-central1 (Iowa) and europe-west1 (Belgium) regions only. It will become available in other regions in the future.