The BigQuery to Bigtable template is a batch pipeline that copies data from a BigQuery table into an existing Bigtable table. The template can either read the entire table or read specific records using a supplied query.
Pipeline requirements
- The source BigQuery table must exist.
- The Bigtable table must exist.
- The worker service account needs the bigquery.datasets.create permission. For more information, see Introduction to IAM. A sample grant command follows this list.
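If the worker service account lacks this permission, one way to grant it is to bind a predefined role that contains it. The following is a minimal sketch, assuming roles/bigquery.user (which includes bigquery.datasets.create) is acceptable in your environment; WORKER_SA_EMAIL is a placeholder for the worker service account's email address.

# Grant the worker service account a predefined role that includes the
# bigquery.datasets.create permission. roles/bigquery.user is one such
# role; substitute your own project ID and service account email.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:WORKER_SA_EMAIL" \
    --role="roles/bigquery.user"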
Template parameters
Parameter | Description |
---|---|
readIdColumn | The name of the BigQuery column storing the unique identifier of the row. |
inputTableSpec | Optional: The BigQuery table to read from, in the format projectId.datasetId.tablename. You must specify either inputTableSpec or query, but not both. |
query | Optional: The SQL query to use to read data from BigQuery. If the BigQuery dataset is in a different project than the Dataflow job, specify the full dataset name in the SQL query, as follows: projectId.datasetId.tablename. You must specify either inputTableSpec or query, but not both. An example command using this parameter follows this table. |
useLegacySql | Optional: Set to true to use legacy SQL. This parameter only applies when using the query parameter. Default: false. |
bigtableWriteInstanceId | The ID of the Bigtable instance that contains the table. |
bigtableWriteTableId | The ID of the Bigtable table to write to. |
bigtableWriteColumnFamily | The name of the column family of the Bigtable table to write data into. |
bigtableWriteAppProfile | Optional: The ID of the Bigtable application profile to use for the export. If you do not specify an app profile, Bigtable uses the instance's default app profile. |
bigtableWriteProjectId | Optional: The ID of the Google Cloud project of the Bigtable instance that you want to write data to. |
bigtableBulkWriteLatencyTargetMs | Optional: The latency target of Bigtable in milliseconds for latency-based throttling. |
bigtableBulkWriteMaxRowKeyCount | Optional: The maximum number of row keys in a Bigtable batch write operation. |
bigtableBulkWriteMaxRequestSizeBytes | Optional: The maximum number of bytes to include per Bigtable batch write operation. |
bigtableRpcAttemptTimeoutMs | Optional: The timeout for each Bigtable RPC attempt in milliseconds. |
bigtableRpcTimeoutMs | Optional: The total timeout for a Bigtable RPC operation in milliseconds. |
bigtableAdditionalRetryCodes | Optional: The additional retry codes. |
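For example, reading with a query instead of an entire table might look like the following gcloud invocation. This is a sketch with hypothetical project, dataset, table, and column names. The escaped backticks keep the fully qualified table name literal in the shell, and SELECT * avoids commas, which the --parameters flag treats as separators between key=value pairs.

# Hypothetical example: read rows with a query rather than inputTableSpec.
# All resource names below (my-dataflow-project, my-bq-project, analytics,
# daily_scores, user_id, my-instance, my-table, cf) are placeholders.
gcloud dataflow flex-template run bq-to-bt-query-example \
    --project=my-dataflow-project \
    --region=us-central1 \
    --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/BigQuery_to_Bigtable \
    --parameters \
readIdColumn=user_id,\
query="SELECT * FROM \`my-bq-project.analytics.daily_scores\`",\
bigtableWriteInstanceId=my-instance,\
bigtableWriteTableId=my-table,\
bigtableWriteColumnFamily=cf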
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the BigQuery to Bigtable template.
- In the provided parameter fields, enter your parameter values.
- Click Run job.
gcloud
In your shell or terminal, run the template:
gcloud dataflow flex-template run JOB_NAME \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/BigQuery_to_Bigtable \
    --parameters \
readIdColumn=READ_COLUMN_ID,\
inputTableSpec=INPUT_TABLE_SPEC,\
bigtableWriteInstanceId=BIGTABLE_INSTANCE_ID,\
bigtableWriteTableId=BIGTABLE_TABLE_ID,\
bigtableWriteColumnFamily=BIGTABLE_COLUMN_FAMILY
Replace the following:
- PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
- JOB_NAME: a unique job name of your choice
- REGION_NAME: the region where you want to deploy your Dataflow job—for example, us-central1
- VERSION: the version of the template that you want to use. You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket—gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket—gs://dataflow-templates-REGION_NAME/
- READ_COLUMN_ID: your BigQuery unique id column
- INPUT_TABLE_SPEC: your BigQuery table name
- BIGTABLE_INSTANCE_ID: your Bigtable instance ID
- BIGTABLE_TABLE_ID: your Bigtable table ID
- BIGTABLE_COLUMN_FAMILY: your Bigtable table column family
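To verify that the job launched, you can list Dataflow jobs in the same region. This is an optional check, not part of the template itself; it uses the same placeholders as the command above.

# List jobs in the region and filter by name to check the job's state
# (for example, Running, Done, or Failed).
gcloud dataflow jobs list \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --filter="name=JOB_NAME"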
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.locations.flexTemplates.launch.
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
  "launch_parameter": {
    "jobName": "JOB_NAME",
    "parameters": {
      "readIdColumn": "READ_COLUMN_ID",
      "inputTableSpec": "INPUT_TABLE_SPEC",
      "bigtableWriteInstanceId": "BIGTABLE_INSTANCE_ID",
      "bigtableWriteTableId": "BIGTABLE_TABLE_ID",
      "bigtableWriteColumnFamily": "BIGTABLE_COLUMN_FAMILY"
    },
    "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/BigQuery_to_Bigtable"
  }
}
Replace the following:
- PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
- JOB_NAME: a unique job name of your choice
- LOCATION: the region where you want to deploy your Dataflow job—for example, us-central1
- VERSION: the version of the template that you want to use. You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket—gs://dataflow-templates-LOCATION/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket—gs://dataflow-templates-LOCATION/
- READ_COLUMN_ID: your BigQuery unique id column
- INPUT_TABLE_SPEC: your BigQuery table name
- BIGTABLE_INSTANCE_ID: your Bigtable instance ID
- BIGTABLE_TABLE_ID: your Bigtable table ID
- BIGTABLE_COLUMN_FAMILY: your Bigtable table column family
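As a minimal sketch of sending this request, you can use curl with an access token from gcloud, assuming the JSON body above is saved in a local file named request.json (the file name is an assumption).

# Send the launch request; request.json holds the JSON body shown above.
curl -X POST \
  "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json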
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.