This document describes how to enable data lineage for your Dataproc Spark jobs either at the project or cluster level.
Overview
Data lineage is a Dataplex feature that lets you track how data moves through your systems: where it comes from, where it is passed to, and what transformations are applied to it.
Data lineage is available for Dataproc Spark jobs, except SparkR and Spark streaming jobs, with Dataproc on Compute Engine 2.0.74+ and 2.1.22+ image versions, and supports BigQuery and Cloud Storage data sources.
Once you enable the feature in your Dataproc cluster, Dataproc Spark jobs capture data lineage events and publish them to the Dataplex Data Lineage API. Dataproc integrates with the Data Lineage API through OpenLineage, using the OpenLineage Spark plugin.
You can access data lineage information through Dataplex, using lineage visualization graphs in the Google Cloud console or the Data Lineage API.
Limitations
Data lineage is not available for SparkR or Spark streaming jobs.
Before you begin
In the Google Cloud console, on the project selector page, select the project that contains the Dataproc cluster for which you want to track lineage.
Enable the Data Lineage API and the Data Catalog API.
Required roles
To get the permissions that you need to use data lineage in Dataproc, ask your administrator to grant you the following IAM roles on the Dataproc cluster VM service account:
- To view data lineage visualizations in Data Catalog or to use the Data Lineage API: Data Lineage Viewer (roles/datalineage.viewer)
- To produce data lineage manually using the API: Data Lineage Events Producer (roles/datalineage.producer)
- To edit lineage using the API: Data Lineage Editor (roles/datalineage.editor)
- To perform all operations on lineage: Data Lineage Administrator (roles/datalineage.admin)
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Enable data lineage at the project level
You can enable data lineage at the project level. Supported Spark jobs that run on clusters created after data lineage is enabled on a project have data lineage enabled. Note that jobs run on existing clusters (clusters created before data lineage was enabled at the project level) won't have data lineage enabled.
How to enable data lineage at the project level
To enable data lineage at the project level, set the following custom project metadata:
| Key | Value |
|---|---|
| DATAPROC_LINEAGE_ENABLED | true |
| DATAPROC_CLUSTER_SCOPES | https://www.googleapis.com/auth/cloud-platform |
You can disable data lineage at the project level by setting the DATAPROC_LINEAGE_ENABLED metadata to false.
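For example, assuming the gcloud CLI, you can set both metadata keys with the compute project-info add-metadata command. This is a sketch; PROJECT_ID is a placeholder:

```shell
# Sketch: set the custom project metadata that enables data lineage
# at the project level. PROJECT_ID is a placeholder.
gcloud compute project-info add-metadata \
    --project=PROJECT_ID \
    --metadata=DATAPROC_LINEAGE_ENABLED=true,DATAPROC_CLUSTER_SCOPES=https://www.googleapis.com/auth/cloud-platform
```

To disable the feature later, run the same command with DATAPROC_LINEAGE_ENABLED=false.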
Enable data lineage at the cluster level
You can enable data lineage when you create a cluster so that all supported Spark jobs submitted to the cluster will have data lineage enabled.
How to enable data lineage at the cluster level
To enable data lineage on a cluster, create a Dataproc cluster with the dataproc:dataproc.lineage.enabled cluster property set to true.
2.0 image version clusters: The cloud-platform Dataproc cluster VM access scope is required for data lineage. Clusters created with Dataproc image version 2.1 and later have the cloud-platform scope enabled by default. If you specify Dataproc image version 2.0 when you create a cluster, set the scope to cloud-platform.
gcloud CLI example:
gcloud dataproc clusters create CLUSTER_NAME \
--project PROJECT_ID \
--region REGION \
--properties 'dataproc:dataproc.lineage.enabled=true'
Disable data lineage on a job
If you enable data lineage at the cluster level, you can disable data lineage on a specific job by passing the spark.extraListeners property with an empty value ("") when you submit the job.
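For example, the following sketch submits a job with lineage disabled by clearing spark.extraListeners. The capitalized names are placeholders:

```shell
# Sketch: submit a Spark job with data lineage disabled by passing
# spark.extraListeners with an empty value. CLUSTER_NAME, PROJECT_ID,
# REGION, CLASS, and APPLICATION_BUCKET are placeholders.
gcloud dataproc jobs submit spark \
    --cluster=CLUSTER_NAME \
    --project=PROJECT_ID \
    --region=REGION \
    --class=CLASS \
    --jars=gs://APPLICATION_BUCKET/spark-application.jar \
    --properties=spark.extraListeners=""
```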
Once enabled, you cannot disable data lineage on the cluster. To eliminate data lineage on all cluster jobs, you can recreate the cluster without the dataproc:dataproc.lineage.enabled property.
Submit a Spark job
When you submit a Spark job on a Dataproc cluster that was created with data lineage enabled, Dataproc captures and reports the data lineage information to the Data Lineage API.
gcloud dataproc jobs submit spark \
--cluster=CLUSTER_NAME \
--project PROJECT_ID \
--region REGION \
--class CLASS \
--jars=gs://APPLICATION_BUCKET/spark-application.jar \
--properties=spark.openlineage.namespace=CUSTOM_NAMESPACE,spark.openlineage.appName=CUSTOM_APPNAME
Notes:
- Adding the spark.openlineage.namespace and spark.openlineage.appName properties, which are used to uniquely identify the job, is optional. If you don't add these properties, Dataproc uses the following default values:
  - Default value for spark.openlineage.namespace: PROJECT_ID
  - Default value for spark.openlineage.appName: spark.app.name
View lineage graphs in Dataplex
A lineage visualization graph displays relationships between your project resources and the processes that created them. You can view data lineage information in the form of a graph visualization in the Google Cloud console, or retrieve it from the Data Lineage API in the form of JSON data.
For more information, see View lineage graphs in Dataplex UI.
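To retrieve lineage data as JSON, you can call the Data Lineage API REST endpoint directly. The following is a sketch that assumes the gcloud CLI is available for authentication and that the API lists lineage processes under a project location; PROJECT_ID and REGION are placeholders:

```shell
# Sketch: list lineage processes via the Data Lineage API REST endpoint.
# PROJECT_ID and REGION are placeholders.
curl -s \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://datalineage.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/processes"
```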
Example:
The following Spark job reads data from a BigQuery table, and writes to another BigQuery table.
#!/usr/bin/env python
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName('LINEAGE_BQ_TO_BQ') \
    .getOrCreate()

# Bucket used by the BigQuery connector for temporary storage.
bucket = 'lineage-ol-test'
spark.conf.set('temporaryGcsBucket', bucket)

# Read the source BigQuery table.
source = 'sample.source'
words = spark.read.format('bigquery') \
    .option('table', source) \
    .load()
words.createOrReplaceTempView('words')

# Aggregate word counts.
word_count = spark.sql(
    'SELECT word, SUM(word_count) AS word_count FROM words GROUP BY word')

# Write the results to the destination BigQuery table.
destination = 'sample.destination'
word_count.write.format('bigquery') \
    .option('table', destination) \
    .save()
The Spark job creates the following lineage graph in the Dataplex UI:
What's next
- Learn more about data lineage.