Java Database Connectivity (JDBC) to BigQuery template

The JDBC to BigQuery template is a batch pipeline that copies data from a relational database table into an existing BigQuery table. This pipeline uses JDBC to connect to the relational database. You can use this template to copy data from any relational database with available JDBC drivers into BigQuery. For an extra layer of protection, you can also pass in a Cloud KMS key along with Base64-encoded username, password, and connection string parameters that are encrypted with the Cloud KMS key. See the Cloud KMS API encryption endpoint for additional details on encrypting your username, password, and connection string parameters.
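For example, you can produce an encrypted connectionURL value by Base64-encoding the plaintext and calling the Cloud KMS encrypt endpoint. The following is a minimal sketch: the key ring KEY_RING and key KEY_NAME are placeholders for a key that you have already created, and the connection string is illustrative. The ciphertext field in the response is the Base64-encoded value to pass to the template:

# Base64-encode the plaintext connection string (the KMS API requires Base64 input).
PLAINTEXT=$(echo -n "jdbc:mysql://some-host:3306/sampledb" | base64)

# Call the Cloud KMS encrypt endpoint; the "ciphertext" field in the response is
# the Base64-encoded encrypted value to pass as the connectionURL parameter.
curl -s -X POST \
    "https://cloudkms.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY_NAME:encrypt" \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d "{\"plaintext\": \"${PLAINTEXT}\"}"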

Pipeline requirements

  • The JDBC drivers for the relational database must be available.
  • The BigQuery table must exist before pipeline execution.
  • The BigQuery table must have a schema that's compatible with the rows returned by the source query (see the example after this list).
  • The relational database must be accessible from the subnet where Dataflow runs.
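For example, you can create a compatible destination table ahead of time with the bq command-line tool. This is a minimal sketch; the project, dataset, table, and schema below are illustrative and must match the columns that your query returns:

bq mk --table \
    my-project:my_dataset.sample_table \
    name:STRING,id:INTEGER,created_at:TIMESTAMP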

Template parameters

The template takes the following parameters:

  • driverJars: The comma-separated list of driver JAR files. For example, gs://<my-bucket>/driver_jar1.jar,gs://<my-bucket>/driver_jar2.jar.
  • driverClassName: The JDBC driver class name. For example, com.mysql.jdbc.Driver.
  • connectionURL: The JDBC connection URL string. For example, jdbc:mysql://some-host:3306/sampledb. Can be passed in as a string that's Base64-encoded and then encrypted with a Cloud KMS key. Note the difference between an Oracle non-RAC database connection string (jdbc:oracle:thin:@some-host:<port>:<sid>) and an Oracle RAC database connection string (jdbc:oracle:thin:@//some-host[:<port>]/<service_name>).
  • query: The query to run on the source to extract the data. For example, select * from sampledb.sample_table.
  • outputTable: The BigQuery output table location, in the format <my-project>:<my-dataset>.<my-table>.
  • bigQueryLoadingTemporaryDirectory: The temporary directory for the BigQuery loading process. For example, gs://<my-bucket>/my-files/temp_dir.
  • connectionProperties: (Optional) The properties string to use for the JDBC connection. The format of the string must be [propertyName=property;]*. For example, unicode=true;characterEncoding=UTF-8.
  • username: (Optional) The username to use for the JDBC connection. Can be passed in as a Base64-encoded string encrypted with a Cloud KMS key.
  • password: (Optional) The password to use for the JDBC connection. Can be passed in as a Base64-encoded string encrypted with a Cloud KMS key.
  • KMSEncryptionKey: (Optional) The Cloud KMS encryption key used to decrypt the username, password, and connection string. If a Cloud KMS key is passed in, the username, password, and connection string must all be passed in encrypted.
  • disabledAlgorithms: (Optional) Comma-separated algorithms to disable. If this value is set to none, no algorithm is disabled. Use with care, because the algorithms disabled by default are known to have either vulnerabilities or performance issues. For example: SSLv3, RC4.
  • extraFilesToStage: Comma-separated Cloud Storage paths or Secret Manager secrets for files to stage in the worker. These files are saved under the /extra_files directory in each worker. For example, gs://<my-bucket>/file.txt,projects/<project-id>/secrets/<secret-id>/versions/<version-id>. See the Secret Manager sketch after this list.
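When you stage files from Secret Manager, each referenced secret version's payload is written out as a file under /extra_files on the worker. As a minimal sketch, assuming a local certificate file ca-cert.pem and a secret named my-ca-cert (both illustrative), you could create the secret like this:

gcloud secrets create my-ca-cert --data-file=ca-cert.pem

You would then reference projects/<project-id>/secrets/my-ca-cert/versions/1 in extraFilesToStage.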

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default regional endpoint is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the JDBC to BigQuery template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates-REGION_NAME/VERSION/Jdbc_to_BigQuery \
    --region REGION_NAME \
    --parameters \
driverJars=DRIVER_PATHS,\
driverClassName=DRIVER_CLASS_NAME,\
connectionURL=JDBC_CONNECTION_URL,\
query=SOURCE_SQL_QUERY,\
outputTable=PROJECT_ID:DATASET.TABLE_NAME,\
bigQueryLoadingTemporaryDirectory=PATH_TO_TEMP_DIR_ON_GCS,\
connectionProperties=CONNECTION_PROPERTIES,\
username=CONNECTION_USERNAME,\
password=CONNECTION_PASSWORD,\
KMSEncryptionKey=KMS_ENCRYPTION_KEY

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

      • latest to use the latest version of the template
      • a dated version name to use a specific version of the template

  • REGION_NAME: the regional endpoint where you want to deploy your Dataflow job—for example, us-central1
  • DRIVER_PATHS: the comma-separated Cloud Storage path(s) of the JDBC driver(s)
  • DRIVER_CLASS_NAME: the driver class name
  • JDBC_CONNECTION_URL: the JDBC connection URL
  • SOURCE_SQL_QUERY: the SQL query to be run on the source database
  • DATASET: your BigQuery dataset
  • TABLE_NAME: your BigQuery table name
  • PATH_TO_TEMP_DIR_ON_GCS: your Cloud Storage path to the temp directory
  • CONNECTION_PROPERTIES: the JDBC connection properties, if necessary
  • CONNECTION_USERNAME: the JDBC connection username
  • CONNECTION_PASSWORD: the JDBC connection password
  • KMS_ENCRYPTION_KEY: the Cloud KMS Encryption Key
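For example, a hypothetical run that copies a MySQL table might look like the following. The bucket names, host, credentials, and driver JAR are illustrative placeholders, and the username and password are passed unencrypted (no KMSEncryptionKey) only to keep the sketch short:

gcloud dataflow jobs run jdbc-to-bigquery-example \
    --gcs-location gs://dataflow-templates-us-central1/latest/Jdbc_to_BigQuery \
    --region us-central1 \
    --parameters \
driverJars=gs://my-bucket/mysql-connector-java-8.0.28.jar,\
driverClassName=com.mysql.jdbc.Driver,\
connectionURL=jdbc:mysql://some-host:3306/sampledb,\
query="select * from sampledb.sample_table",\
outputTable=my-project:my_dataset.sample_table,\
bigQueryLoadingTemporaryDirectory=gs://my-bucket/tmp,\
username=my-user,\
password=my-password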

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Jdbc_to_BigQuery
{
   "jobName": "JOB_NAME",
   "parameters": {
       "driverJars": "DRIVER_PATHS",
       "driverClassName": "DRIVER_CLASS_NAME",
       "connectionURL": "JDBC_CONNECTION_URL",
       "query": "SOURCE_SQL_QUERY",
       "outputTable": "PROJECT_ID:DATASET.TABLE_NAME",
       "bigQueryLoadingTemporaryDirectory": "PATH_TO_TEMP_DIR_ON_GCS",
       "connectionProperties": "CONNECTION_PROPERTIES",
       "username": "CONNECTION_USERNAME",
       "password": "CONNECTION_PASSWORD",
       "KMSEncryptionKey":"KMS_ENCRYPTION_KEY"
   },
   "environment": { "zone": "us-central1-f" }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

      • latest to use the latest version of the template
      • a dated version name to use a specific version of the template

  • LOCATION: the regional endpoint where you want to deploy your Dataflow job—for example, us-central1
  • DRIVER_PATHS: the comma-separated Cloud Storage path(s) of the JDBC driver(s)
  • DRIVER_CLASS_NAME: the driver class name
  • JDBC_CONNECTION_URL: the JDBC connection URL
  • SOURCE_SQL_QUERY: the SQL query to be run on the source database
  • DATASET: your BigQuery dataset
  • TABLE_NAME: your BigQuery table name
  • PATH_TO_TEMP_DIR_ON_GCS: your Cloud Storage path to the temp directory
  • CONNECTION_PROPERTIES: the JDBC connection properties, if necessary
  • CONNECTION_USERNAME: the JDBC connection username
  • CONNECTION_PASSWORD: the JDBC connection password
  • KMS_ENCRYPTION_KEY: the Cloud KMS Encryption Key
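You can send the request with curl. This is a minimal sketch that authenticates with your gcloud credentials and assumes the JSON request body above is saved in a local file named request.json (an illustrative name):

curl -s -X POST \
    "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Jdbc_to_BigQuery" \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json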