Dataflow is a managed service for executing a wide variety of data
processing patterns. The documentation on this site shows you how to deploy
your batch and streaming data processing pipelines using
Dataflow, including instructions for using service features.
The Apache Beam SDK
implements an open source programming model that enables you to develop both
batch and streaming pipelines. You create your pipelines with an Apache Beam
program and then run them on the Dataflow service. The
Apache Beam documentation provides in-depth conceptual information and
reference material for the programming model, SDKs, and other runners.
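As a minimal sketch of that model, the following Python pipeline (assuming the `apache-beam` package is installed) counts words in a small in-memory collection. The same code runs unchanged on the Dataflow service when you pass `--runner=DataflowRunner` together with project, region, and staging flags on the command line:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# With no explicit flags, PipelineOptions picks up command-line
# arguments, so the runner can be chosen at launch time.
options = PipelineOptions()

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | "Create" >> beam.Create(["hello", "dataflow", "hello"])
     | "CountWords" >> beam.combiners.Count.PerElement()
     | "Format" >> beam.MapTuple(lambda word, count: f"{word}: {count}")
     | "Print" >> beam.Map(print))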
Deploying production-ready log exports to Splunk using Dataflow
Create a scalable, fault-tolerant log export mechanism using Cloud Logging, Pub/Sub, and Dataflow. Stream your logs and events from resources in Google Cloud into either Splunk Enterprise or Splunk Cloud for IT operations or security use cases.
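A sketch of the Dataflow leg of that pipeline, assuming the Python Beam SDK and hypothetical placeholder values for the Pub/Sub subscription and the Splunk HTTP Event Collector (HEC) endpoint and token. The Google-provided Pub/Sub to Splunk Dataflow template adds batching, retries, and dead-lettering that this sketch omits:

import json

import apache_beam as beam
import requests
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical placeholders -- substitute your own resources.
SUBSCRIPTION = "projects/my-project/subscriptions/log-export-sub"
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

class SendToSplunkHEC(beam.DoFn):
    """Posts each Pub/Sub log message to the Splunk HTTP Event Collector."""

    def process(self, message: bytes):
        response = requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            json={"event": json.loads(message)},
            timeout=10,
        )
        response.raise_for_status()

# streaming=True because Pub/Sub is an unbounded source; runner and
# project flags come from the command line when deploying to Dataflow.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | "ReadLogs" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
     | "WriteToSplunk" >> beam.ParDo(SendToSplunkHEC()))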
Explore design patterns and best practices for common logging export scenarios. You might export logs for several reasons, such as retaining them to meet compliance requirements or running data analytics against metrics extracted from the logs.
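Every such export starts with a Cloud Logging sink that routes matching entries to a destination. A minimal sketch using the `google-cloud-logging` client library, with a hypothetical project, topic, and severity filter:

from google.cloud import logging

# Hypothetical resource names -- substitute your own.
PROJECT = "my-project"
TOPIC = "projects/my-project/topics/log-export"

client = logging.Client(project=PROJECT)

# Route WARNING-and-above entries to Pub/Sub, where a Dataflow
# pipeline (such as the Splunk example above) can pick them up.
sink = client.sink(
    "splunk-export",
    filter_="severity>=WARNING",
    destination=f"pubsub.googleapis.com/{TOPIC}",
)

if not sink.exists():
    sink.create()
    print(f"Created sink {sink.name}")

Note that the sink's writer identity must also be granted permission to publish to the destination topic before entries flow.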