The JDBC to BigQuery template is a batch pipeline that copies data from a relational database table into an existing BigQuery table. This pipeline uses JDBC to connect to the relational database. Use this template to copy data from any relational database with available JDBC drivers into BigQuery.
For an extra layer of protection, you can pass in a Cloud KMS key, along with Base64-encoded username, password, and connection string parameters encrypted with the Cloud KMS key. For additional details about encrypting your username, password, and connection string parameters, see the Cloud KMS API encryption endpoint.
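For example, you can encrypt a value by calling the Cloud KMS encrypt endpoint directly. The following is a minimal sketch, assuming a key named `your-key` in a keyring named `your-keyring` (hypothetical names). The API requires the plaintext to be Base64-encoded, and the `ciphertext` field it returns is already Base64-encoded, so you can pass it to the template as-is.

```
# Base64-encode the plaintext value (here, a hypothetical JDBC password),
# then encrypt it with a Cloud KMS key. All resource names are placeholders.
PLAINTEXT=$(echo -n "some-password" | base64)

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{\"plaintext\": \"${PLAINTEXT}\"}" \
  "https://cloudkms.googleapis.com/v1/projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key:encrypt"
# The "ciphertext" field of the JSON response is the Base64-encoded value to
# pass as the template's username, password, or connectionURL parameter.
```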
Pipeline requirements
- The JDBC drivers for the relational database must be available.
- The BigQuery table must exist before pipeline execution (see the sketch after this list).
- The BigQuery table must have a compatible schema.
- The relational database must be accessible from the subnet where Dataflow runs.
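Because the output table must already exist, you can create it ahead of time with the `bq` CLI. This is a minimal sketch with hypothetical dataset, table, and column names; match the schema to the rows your source query produces.

```
# Create the BigQuery output table before running the pipeline.
# Dataset, table, and column names are illustrative.
bq mk --table \
  your-project:your_dataset.your_table \
  id:INTEGER,name:STRING
```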
Template parameters
Parameter | Description |
---|---|
`driverJars` | The comma-separated list of driver JAR files. For example: `gs://your-bucket/driver_jar1.jar,gs://your-bucket/driver_jar2.jar`. |
`driverClassName` | The JDBC driver class name. For example: `com.mysql.jdbc.Driver`. |
`connectionURL` | The JDBC connection URL string. For example: `jdbc:mysql://some-host:3306/sampledb`. You can pass in this value as a string that's Base64-encoded and then encrypted with a Cloud KMS key. Note the difference between an Oracle non-RAC database connection string (`jdbc:oracle:thin:@some-host:<port>:<sid>`) and an Oracle RAC database connection string (`jdbc:oracle:thin:@//some-host[:<port>]/<service_name>`). |
`outputTable` | The BigQuery table location to write the output to. The name must use the format `<project>:<dataset>.<table_name>`. The table's schema must match the input objects. For example: `<my-project>:<my-dataset>.<my-table>`. |
`bigQueryLoadingTemporaryDirectory` | The temporary directory for the BigQuery loading process. For example: `gs://your-bucket/your-files/temp_dir`. |
`connectionProperties` | Optional: The properties string to use for the JDBC connection. Use the string format `[propertyName=property;]*`. For example: `unicode=true;characterEncoding=UTF-8`. |
`username` | Optional: The username to use for the JDBC connection. You can pass in this value as a Base64-encoded string encrypted with a Cloud KMS key. |
`password` | Optional: The password to use for the JDBC connection. You can pass in this value as a Base64-encoded string encrypted with a Cloud KMS key. |
`query` | Optional: The query to run on the source to extract the data. For example: `select * from sampledb.sample_table`. |
`KMSEncryptionKey` | Optional: The Cloud KMS encryption key to use to decrypt the username, password, and connection string. If you pass in a Cloud KMS key, the username, password, and connection string must all be passed in encrypted. For example: `projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key`. |
`useColumnAlias` | Optional: If enabled (set to `true`), the pipeline uses the column alias (`AS`) instead of the column name to map the rows to BigQuery. Defaults to `false`. |
`isTruncate` | Optional: If enabled (set to `true`), the pipeline truncates the BigQuery table before loading data into it. Defaults to `false`, which causes the pipeline to append data. |
`partitionColumn` | Optional: If this parameter is provided (along with `table`), JdbcIO reads the table in parallel by executing multiple instances of the query on the same table (subquery) using ranges. Currently, only `Long` partition columns are supported. See the example after this table. |
`table` | Optional: The table to read from when using partitions. This parameter also accepts a subquery in parentheses. For example: `(select id, name from Person as subq)`. |
`numPartitions` | Optional: The number of partitions. With the lower and upper bound, this value forms partition strides for generated `WHERE` clause expressions that are used to split the partition column evenly. When the input is less than `1`, the number is set to `1`. |
`lowerBound` | Optional: The lower bound to use in the partition scheme. If not provided, this value is automatically inferred by Apache Beam for the supported types. |
`upperBound` | Optional: The upper bound to use in the partition scheme. If not provided, this value is automatically inferred by Apache Beam for the supported types. |
`disabledAlgorithms` | Optional: Comma-separated algorithms to disable. If this value is set to `none`, no algorithm is disabled. Use with caution, because the algorithms disabled by default are known to have either vulnerabilities or performance issues. For example: `SSLv3, RC4`. |
`extraFilesToStage` | Optional: Comma-separated Cloud Storage paths or Secret Manager secrets for files to stage in the worker. These files are saved in the `/extra_files` directory in each worker. For example: `gs://your-bucket/file.txt,projects/project-id/secrets/secret-id/versions/version-id`. |
`useStorageWriteApi` | Optional: If `true`, the pipeline uses the BigQuery Storage Write API. The default value is `false`. For more information, see Using the Storage Write API. |
`useStorageWriteApiAtLeastOnce` | Optional: When using the Storage Write API, specifies the write semantics. To use at-least-once semantics, set this parameter to `true`. To use exactly-once semantics, set the parameter to `false`. This parameter applies only when `useStorageWriteApi` is `true`. The default value is `false`. |
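To see how the partitioned-read parameters fit together, here is a minimal sketch of a template run (the `gcloud` command itself is covered in the next section). The bucket, host, database, table, and bound values are all hypothetical.

```
# Hypothetical partitioned read: the "Person" table is split on its "id"
# column into 10 ranges between 1 and 1000000 and read in parallel.
gcloud dataflow flex-template run partitioned-jdbc-to-bigquery \
  --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/Jdbc_to_BigQuery_Flex \
  --project=your-project \
  --region=us-central1 \
  --parameters \
driverJars=gs://your-bucket/mysql-connector-java.jar,\
driverClassName=com.mysql.jdbc.Driver,\
connectionURL=jdbc:mysql://some-host:3306/sampledb,\
outputTable=your-project:your_dataset.person,\
bigQueryLoadingTemporaryDirectory=gs://your-bucket/temp_dir,\
table=Person,\
partitionColumn=id,\
numPartitions=10,\
lowerBound=1,\
upperBound=1000000
```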
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default regional endpoint is us-central1. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the JDBC to BigQuery template.
- In the provided parameter fields, enter your parameter values.
- Click Run job.
gcloud
In your shell or terminal, run the template:
```
gcloud dataflow flex-template run JOB_NAME \
  --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/Jdbc_to_BigQuery_Flex \
  --project=PROJECT_ID \
  --region=REGION_NAME \
  --parameters \
driverJars=DRIVER_JARS,\
driverClassName=DRIVER_CLASS_NAME,\
connectionURL=CONNECTION_URL,\
outputTable=OUTPUT_TABLE,\
bigQueryLoadingTemporaryDirectory=BIG_QUERY_LOADING_TEMPORARY_DIRECTORY
```
Replace the following:
- `JOB_NAME`: a unique job name of your choice
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- `REGION_NAME`: the regional endpoint where you want to deploy your Dataflow job, for example `us-central1`
- `DRIVER_JARS`: the comma-separated Cloud Storage paths of the JDBC drivers
- `DRIVER_CLASS_NAME`: the JDBC driver class name
- `CONNECTION_URL`: the JDBC connection URL string
- `OUTPUT_TABLE`: the BigQuery output table
- `BIG_QUERY_LOADING_TEMPORARY_DIRECTORY`: the temporary directory for the BigQuery loading process
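To tie this to the Cloud KMS option described earlier, a run with encrypted credentials might look like the following sketch. The `CIPHERTEXT_*` placeholders stand for the Base64 ciphertext returned by the KMS encrypt call, and all resource names are hypothetical.

```
# Hypothetical run with Cloud KMS-encrypted credentials. When
# KMSEncryptionKey is set, username, password, and connectionURL must all
# carry Base64-encoded ciphertext rather than plaintext.
gcloud dataflow flex-template run jdbc-to-bigquery-encrypted \
  --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/Jdbc_to_BigQuery_Flex \
  --project=your-project \
  --region=us-central1 \
  --parameters \
driverJars=gs://your-bucket/mysql-connector-java.jar,\
driverClassName=com.mysql.jdbc.Driver,\
connectionURL=CIPHERTEXT_CONNECTION_URL,\
outputTable=your-project:your_dataset.sample_table,\
bigQueryLoadingTemporaryDirectory=gs://your-bucket/temp_dir,\
username=CIPHERTEXT_USERNAME,\
password=CIPHERTEXT_PASSWORD,\
KMSEncryptionKey=projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key
```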
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.
```
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
  "launchParameter": {
    "jobName": "JOB_NAME",
    "parameters": {
      "driverJars": "DRIVER_JARS",
      "driverClassName": "DRIVER_CLASS_NAME",
      "connectionURL": "CONNECTION_URL",
      "outputTable": "OUTPUT_TABLE",
      "bigQueryLoadingTemporaryDirectory": "BIG_QUERY_LOADING_TEMPORARY_DIRECTORY"
    },
    "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/Jdbc_to_BigQuery_Flex",
    "environment": { "maxWorkers": "10" }
  }
}
```
Replace the following:
- `PROJECT_ID`: the Google Cloud project ID where you want to run the Dataflow job
- `JOB_NAME`: a unique job name of your choice
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-LOCATION/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-LOCATION/
- `LOCATION`: the regional endpoint where you want to deploy your Dataflow job, for example `us-central1`
- `DRIVER_JARS`: the comma-separated Cloud Storage paths of the JDBC drivers
- `DRIVER_CLASS_NAME`: the JDBC driver class name
- `CONNECTION_URL`: the JDBC connection URL string
- `OUTPUT_TABLE`: the BigQuery output table
- `BIG_QUERY_LOADING_TEMPORARY_DIRECTORY`: the temporary directory for the BigQuery loading process
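One way to send this request is with curl, using an access token from gcloud. This sketch assumes you've saved the JSON body above to a local file named request.json (a name chosen here for illustration):

```
# Send the launch request. PROJECT_ID and LOCATION are the same placeholders
# used in the request body saved in request.json.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json \
  "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch"
```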
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.