The Spanner change streams to Pub/Sub template is a streaming pipeline that streams Spanner data change records and writes them to Pub/Sub topics by using Dataflow Runner V2.
To output your data to a new Pub/Sub topic, you need to first create the topic. After creation, Pub/Sub automatically generates and attaches a subscription to the new topic. If you try to output data to a Pub/Sub topic that doesn't exist, the Dataflow pipeline throws an exception, and the pipeline gets stuck as it continuously tries to make a connection.
If the necessary Pub/Sub topic already exists, you can output data to that topic.
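For example, the output topic can be created ahead of time with the gcloud CLI. The topic and subscription names below are placeholders, not values from this template:

```shell
# Create the output topic before launching the pipeline
# (my-change-stream-topic is a placeholder name).
gcloud pubsub topics create my-change-stream-topic

# Optionally attach a subscription so the change records can be consumed.
gcloud pubsub subscriptions create my-change-stream-sub \
    --topic=my-change-stream-topic
```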
For more information, see About change streams, Build change streams connections with Dataflow, and Change streams best practices.
Pipeline requirements
- The Spanner instance must exist prior to running the pipeline.
- The Spanner database must exist prior to running the pipeline.
- The Spanner metadata instance must exist prior to running the pipeline.
- The Spanner metadata database must exist prior to running the pipeline.
- The Spanner change stream must exist prior to running the pipeline.
- The Pub/Sub topic must exist prior to running the pipeline.
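As a sketch, the prerequisites above can be verified with the gcloud CLI, and a change stream can be created with DDL if one doesn't exist yet. All resource names below are placeholders:

```shell
# Verify that each required resource exists (placeholder names throughout).
gcloud spanner instances describe my-instance
gcloud spanner databases describe my-database --instance=my-instance
gcloud spanner instances describe my-metadata-instance
gcloud spanner databases describe my-metadata-database \
    --instance=my-metadata-instance
gcloud pubsub topics describe my-change-stream-topic

# Create a change stream that watches the entire database.
gcloud spanner databases ddl update my-database --instance=my-instance \
    --ddl='CREATE CHANGE STREAM my_change_stream FOR ALL'
```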
Template parameters
Required parameters
- spannerInstanceId: The Spanner instance to read change streams from.
- spannerDatabase: The Spanner database to read change streams from.
- spannerMetadataInstanceId: The Spanner instance to use for the change streams connector metadata table.
- spannerMetadataDatabase: The Spanner database to use for the change streams connector metadata table.
- spannerChangeStreamName: The name of the Spanner change stream to read from.
- pubsubTopic: The Pub/Sub topic for change streams output.
Optional parameters
- spannerProjectId: The project to read change streams from. This project is also where the change streams connector metadata table is created. The default for this parameter is the project where the Dataflow pipeline is running.
- spannerDatabaseRole: The Spanner database role to use when running the template. This parameter is required only when the IAM principal who is running the template is a fine-grained access control user. The database role must have the `SELECT` privilege on the change stream and the `EXECUTE` privilege on the change stream's read function. For more information, see Fine-grained access control for change streams (https://cloud.google.com/spanner/docs/fgac-change-streams).
- spannerMetadataTableName: The Spanner change streams connector metadata table name to use. If not provided, Spanner automatically creates the change streams connector metadata table during the pipeline run. You must provide this parameter when updating an existing pipeline. Don't use this parameter for other cases.
- startTimestamp: The starting DateTime (https://tools.ietf.org/html/rfc3339), inclusive, to use for reading change streams. For example, `2021-10-12T07:20:50.52Z`. Defaults to the timestamp when the pipeline starts, that is, the current time.
- endTimestamp: The ending DateTime (https://tools.ietf.org/html/rfc3339), inclusive, to use for reading change streams. For example, `2021-10-12T07:20:50.52Z`. Defaults to an infinite time in the future.
- spannerHost: The Cloud Spanner endpoint to call in the template. Only used for testing. For example, `https://spanner.googleapis.com`. Defaults to `https://spanner.googleapis.com`.
- outputDataFormat: The format of the output. Output is wrapped in many PubsubMessages and sent to a Pub/Sub topic. Allowed formats are `JSON` and `AVRO`. Default is `JSON`.
- pubsubAPI: The Pub/Sub API used to implement the pipeline. Allowed APIs are `pubsubio` and `native_client`. For a small number of queries per second (QPS), `native_client` has less latency. For a large number of QPS, `pubsubio` provides better and more stable performance. The default is `pubsubio`.
- pubsubProjectId: The project of the Pub/Sub topic. The default for this parameter is the project where the Dataflow pipeline is running.
- rpcPriority: The request priority for Spanner calls. Allowed values are `HIGH`, `MEDIUM`, and `LOW`. Defaults to `HIGH`.
- includeSpannerSource: Whether to include, in the output message data, the Spanner database ID and instance ID that the change stream is read from. Defaults to `false`.
- outputMessageMetadata: The string value for the custom field `outputMessageMetadata` in the output Pub/Sub message. Defaults to empty, and the `outputMessageMetadata` field is populated only if this value is non-empty. Escape any special characters, such as double quotes, when entering the value.
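If the spannerDatabaseRole parameter is used under fine-grained access control, the role needs the two privileges described above. A possible set of grants, assuming placeholder role and stream names and the documented `READ_<stream name>` naming of a change stream's read function, might look like:

```shell
# Grant the privileges required by a fine-grained access control role
# (my-database, my-instance, my_change_stream, and my_role are placeholders).
gcloud spanner databases ddl update my-database --instance=my-instance \
    --ddl='GRANT SELECT ON CHANGE STREAM my_change_stream TO ROLE my_role; GRANT EXECUTE ON TABLE FUNCTION READ_my_change_stream TO ROLE my_role'
```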
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default region is `us-central1`. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the Cloud Spanner change streams to Pub/Sub template.
- In the provided parameter fields, enter your parameter values.
- Click Run job.
gcloud
In your shell or terminal, run the template:
```shell
gcloud dataflow flex-template run JOB_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/Spanner_Change_Streams_to_PubSub \
    --region REGION_NAME \
    --parameters \
spannerInstanceId=SPANNER_INSTANCE_ID,\
spannerDatabase=SPANNER_DATABASE,\
spannerMetadataInstanceId=SPANNER_METADATA_INSTANCE_ID,\
spannerMetadataDatabase=SPANNER_METADATA_DATABASE,\
spannerChangeStreamName=SPANNER_CHANGE_STREAM,\
pubsubTopic=PUBSUB_TOPIC
```
Replace the following:
- `JOB_NAME`: a unique job name of your choice
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- `REGION_NAME`: the region where you want to deploy your Dataflow job, for example `us-central1`
- `SPANNER_INSTANCE_ID`: the Spanner instance ID
- `SPANNER_DATABASE`: the Spanner database
- `SPANNER_METADATA_INSTANCE_ID`: the Spanner metadata instance ID
- `SPANNER_METADATA_DATABASE`: the Spanner metadata database
- `SPANNER_CHANGE_STREAM`: the Spanner change stream
- `PUBSUB_TOPIC`: the Pub/Sub topic for change streams output
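As an illustration, a filled-in invocation might look like the following. All values are placeholders, and two of the optional parameters described earlier are included:

```shell
# Example run with placeholder values; adjust names, region, and
# template version for your environment.
gcloud dataflow flex-template run changestreams-to-pubsub \
    --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/Spanner_Change_Streams_to_PubSub \
    --region us-central1 \
    --parameters \
spannerInstanceId=my-instance,\
spannerDatabase=my-database,\
spannerMetadataInstanceId=my-metadata-instance,\
spannerMetadataDatabase=my-metadata-database,\
spannerChangeStreamName=my_change_stream,\
pubsubTopic=my-change-stream-topic,\
outputDataFormat=JSON,\
rpcPriority=MEDIUM
```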
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see `projects.templates.launch`.
```
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
  "launch_parameter": {
    "jobName": "JOB_NAME",
    "parameters": {
      "spannerInstanceId": "SPANNER_INSTANCE_ID",
      "spannerDatabase": "SPANNER_DATABASE",
      "spannerMetadataInstanceId": "SPANNER_METADATA_INSTANCE_ID",
      "spannerMetadataDatabase": "SPANNER_METADATA_DATABASE",
      "spannerChangeStreamName": "SPANNER_CHANGE_STREAM",
      "pubsubTopic": "PUBSUB_TOPIC"
    },
    "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/Spanner_Change_Streams_to_PubSub"
  }
}
```
Replace the following:
- `PROJECT_ID`: the Google Cloud project ID where you want to run the Dataflow job
- `JOB_NAME`: a unique job name of your choice
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- `LOCATION`: the region where you want to deploy your Dataflow job, for example `us-central1`
- `SPANNER_INSTANCE_ID`: the Spanner instance ID
- `SPANNER_DATABASE`: the Spanner database
- `SPANNER_METADATA_INSTANCE_ID`: the Spanner metadata instance ID
- `SPANNER_METADATA_DATABASE`: the Spanner metadata database
- `SPANNER_CHANGE_STREAM`: the Spanner change stream
- `PUBSUB_TOPIC`: the Pub/Sub topic for change streams output
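The POST request can be issued from a terminal with curl, using gcloud to mint an access token. All values below are placeholders:

```shell
# Launch the flex template via the REST API (placeholder project, region,
# and resource names; requires an authenticated gcloud session).
curl -X POST \
  "https://dataflow.googleapis.com/v1b3/projects/my-project/locations/us-central1/flexTemplates:launch" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "launch_parameter": {
      "jobName": "changestreams-to-pubsub",
      "parameters": {
        "spannerInstanceId": "my-instance",
        "spannerDatabase": "my-database",
        "spannerMetadataInstanceId": "my-metadata-instance",
        "spannerMetadataDatabase": "my-metadata-database",
        "spannerChangeStreamName": "my_change_stream",
        "pubsubTopic": "my-change-stream-topic"
      },
      "containerSpecGcsPath": "gs://dataflow-templates-us-central1/latest/flex/Spanner_Change_Streams_to_PubSub"
    }
  }'
```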
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.