Create a Dataflow pipeline using Java
This document shows you how to set up your Google Cloud project, create an example pipeline built with the Apache Beam SDK for Java, and run the example pipeline on the Dataflow service. The pipeline reads a text file from Cloud Storage, counts the number of unique words in the file, and then writes the word counts back to Cloud Storage. This tutorial requires Maven, but it's also possible to convert the example project from Maven to Gradle. To learn more, see Optional: Convert from Maven to Gradle.
For step-by-step guidance on this task directly in the Google Cloud console, click Guide me.
The following sections take you through the same steps as clicking Guide me.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
- Enable the Dataflow, Compute Engine, Cloud Logging, Cloud Storage, Google Cloud Storage JSON, BigQuery, Cloud Pub/Sub, Cloud Datastore, and Cloud Resource Manager APIs.
- Create a service account:
  - In the console, go to the Create service account page, and select your project.
  - In the Service account name field, enter a name. The console fills in the Service account ID field based on this name.
  - In the Service account description field, enter a description, for example, "Service account for quickstart".
  - Click Create and continue.
  - To provide access to your project, grant the following role to your service account: Project > Owner. In the Select a role list, select the role. For additional roles, click Add another role and add each additional role.
  - Click Continue.
  - Click Done to finish creating the service account. Do not close your browser window; you will use it in the next step.
- Create a service account key:
  - In the console, click the email address for the service account that you created.
  - Click Keys.
  - Click Add key, and then click Create new key.
  - Click Create. A JSON key file is downloaded to your computer.
  - Click Close.
- Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your service account key. This variable applies only to your current shell session, so if you open a new session, set the variable again.
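On Linux or macOS, for example, you can set the variable as follows (the key file path below is a hypothetical placeholder; on Windows, use `set` in the Command Prompt or `$env:` in PowerShell instead):

```shell
# Hypothetical path; point this at the JSON key file you downloaded.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account-key.json"

# Confirm the variable is set in the current session.
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```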
- Create a Cloud Storage bucket:
- In the console, go to the Cloud Storage Browser page.
- Click Create bucket.
- On the Create a bucket page, enter your bucket information. To go to the next step, click Continue.
- For Name your bucket, enter a unique bucket name. Don't include sensitive information in the bucket name, because the bucket namespace is global and publicly visible.
-
For Choose where to store your data, do the following:
- Select a Location type option.
- Select a Location option.
- For Choose a default storage class for your data, select the following: Standard.
- For Choose how to control access to objects, select an Access control option.
- For Advanced settings (optional), specify an encryption method, a retention policy, or bucket labels.
- Click Create.
- Copy the following, as you need them in a later section:
- Your Cloud Storage bucket name.
- Your Google Cloud project ID. To find this ID, see Identifying projects.
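As a convenience, you can keep these two values in shell variables for the rest of the session so that later commands can reference them (the values below are hypothetical placeholders; substitute your own):

```shell
# Hypothetical values; replace with your bucket name and project ID.
export BUCKET_NAME="my-wordcount-bucket"
export PROJECT_ID="my-project-id"

echo "Using bucket gs://$BUCKET_NAME in project $PROJECT_ID"
```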
- Download and install the Java Development Kit (JDK) version 11. (Dataflow continues to support version 8.) Verify that the JAVA_HOME environment variable is set and points to your JDK installation.
- Download and install Apache Maven, following Maven's installation guide for your specific operating system.
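One way to sanity-check the JDK and Maven installation is a quick sketch like the following (the exact version banners vary by system, and the fallback messages keep the check from aborting when a tool is missing):

```shell
# Show where JAVA_HOME points, or "not set" if it is missing.
echo "JAVA_HOME=${JAVA_HOME:-not set}"

# Print the first line of each tool's version banner, if installed.
java -version 2>&1 | head -n 1 || echo "java not found on PATH"
mvn -version 2>&1 | head -n 1 || echo "mvn not found on PATH"
```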
Get the pipeline code
The Apache Beam SDK is an open source programming model for data processing pipelines. You define these pipelines with an Apache Beam program and can choose a runner, such as Dataflow, to run your pipeline.
- In your shell or terminal, use the Maven Archetype Plugin to create a Maven project on your computer that contains the Apache Beam SDK's WordCount examples:

      mvn archetype:generate \
          -DarchetypeGroupId=org.apache.beam \
          -DarchetypeArtifactId=beam-sdks-java-maven-archetypes-examples \
          -DarchetypeVersion=2.40.0 \
          -DgroupId=org.example \
          -DartifactId=word-count-beam \
          -Dversion="0.1" \
          -Dpackage=org.apache.beam.examples \
          -DinteractiveMode=false
  The command creates a new directory called word-count-beam under your current directory. The word-count-beam directory contains a simple pom.xml file and a series of example pipelines that count words in text files.

- Verify that your word-count-beam directory contains the pom.xml file:

  Linux or macOS:

      cd word-count-beam/
      ls

  The output is the following:

      pom.xml  src

  Windows:

      cd word-count-beam/
      dir

  The output is the following:

      pom.xml  src
- Verify that your Maven project contains the example pipelines:

  Linux or macOS:

      ls src/main/java/org/apache/beam/examples/

  Windows:

      dir src/main/java/org/apache/beam/examples/

  In both cases, the output is the following:

      DebuggingWordCount.java  WindowedWordCount.java  common
      MinimalWordCount.java    WordCount.java
For a detailed introduction to the Apache Beam concepts that are used in these examples, see the Apache Beam WordCount Example. The instructions in the next sections use WordCount.java.
Run the pipeline locally
- In your shell or terminal, run the WordCount pipeline locally from your word-count-beam directory:

      mvn compile exec:java \
          -Dexec.mainClass=org.apache.beam.examples.WordCount \
          -Dexec.args="--output=counts"

  The output files have the prefix counts and are written to the word-count-beam directory. They contain unique words from the input text and the number of occurrences of each word.
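The transformation that WordCount performs can be approximated with standard Unix tools. This sketch is not part of the Beam example, but it shows the kind of word/count pairs to expect in the counts files:

```shell
# Split a sample sentence into one word per line, then count each unique word.
printf 'to be or not to be\n' | tr ' ' '\n' | sort | uniq -c
# Prints (leading whitespace may vary):
#   2 be
#   1 not
#   1 or
#   2 to
```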
Run the pipeline on the Dataflow service
- In your shell or terminal, build and run the WordCount pipeline on the Dataflow service from your word-count-beam directory:

      mvn -Pdataflow-runner compile exec:java \
          -Dexec.mainClass=org.apache.beam.examples.WordCount \
          -Dexec.args="--project=PROJECT_ID \
          --gcpTempLocation=gs://BUCKET_NAME/temp/ \
          --output=gs://BUCKET_NAME/output \
          --runner=DataflowRunner \
          --region=REGION"

  Replace the following:
  - PROJECT_ID: your Cloud project ID
  - BUCKET_NAME: the name of your Cloud Storage bucket
  - REGION: a Dataflow regional endpoint, like us-central1
View your results
- In the console, go to the Dataflow Jobs page.

  The Jobs page shows the details of all the available jobs, including the status. The wordcount job's Status is Running at first, and then updates to Succeeded.

- In the console, go to the Cloud Storage Browser page.

  The Browser page displays the list of all the storage buckets in your project.
- Click the storage bucket that you created.
The Bucket details page shows the output files and staging files that your Dataflow job created.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
Delete the project
The easiest way to eliminate billing is to delete the Google Cloud project that you created for the quickstart.
- In the console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the individual resources
If you want to keep the Google Cloud project that you used in this quickstart, then delete the individual resources:
- In the console, go to the Cloud Storage Browser page.
- Click the checkbox for the bucket that you want to delete.
- To delete the bucket, click Delete, and then follow the instructions.
What's next
- Learn about the Apache Beam programming model.
- Learn how to design, create, and test your own pipeline.
- Work through the WordCount and Mobile Gaming examples.