The MySQL to BigQuery template is a batch pipeline that copies data from a MySQL table into an existing BigQuery table. This pipeline uses JDBC to connect to MySQL. For an extra layer of protection, you can also pass in a Cloud KMS key along with Base64-encoded username, password, and connection string parameters encrypted with the Cloud KMS key. For more information about encrypting your username, password, and connection string parameters, see the Cloud KMS API encryption endpoint.
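For example, you can produce the encrypted, Base64-encoded parameter values with the gcloud CLI. The following is a minimal sketch; the key ring and key names (my-keyring, my-key) are placeholders, not values the template defines:

```
# Encrypt the JDBC connection string with a Cloud KMS key (placeholder key
# ring and key names), then Base64-encode the ciphertext and strip newlines,
# because the template requires a Base64 string with no whitespace.
echo -n "jdbc:mysql://some-host:3306/sampledb" | \
  gcloud kms encrypt \
    --location=global \
    --keyring=my-keyring \
    --key=my-key \
    --plaintext-file=- \
    --ciphertext-file=- | \
  base64 | tr -d '\n'
```

Repeat the same command for the username and password values, and pass the key's resource name in the KMSEncryptionKey parameter.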
Pipeline requirements
- The BigQuery table must exist before pipeline execution.
- The BigQuery table must have a compatible schema; see the sketch after this list.
- The relational database must be accessible from the subnet where Dataflow runs.
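To satisfy the first two requirements, you can create the destination table ahead of time with the bq command-line tool. This is only an illustrative sketch; the dataset, table, and column names are placeholders that must match the rows your extraction query produces:

```
# Create the BigQuery destination table with a schema compatible with the
# extracted rows (placeholder dataset, table, and column names).
bq mk --table \
  my_dataset.sample_table \
  id:INTEGER,name:STRING,created_at:TIMESTAMP
```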
Template parameters
| Parameter | Description |
|---|---|
| `connectionURL` | The JDBC connection URL string. For example, `jdbc:mysql://some-host:3306/sampledb`. You can pass in this value as a string that's encrypted with a Cloud KMS key and then Base64-encoded. Remove whitespace characters from the Base64-encoded string. |
| `outputTable` | The BigQuery output table location, in the format `<my-project>:<my-dataset>.<my-table>`. |
| `bigQueryLoadingTemporaryDirectory` | The temporary directory for the BigQuery loading process. For example, `gs://<my-bucket>/my-files/temp_dir`. |
| `query` | The query to run on the source to extract the data. For example, `select * from sampledb.sample_table`. Required when not using partitions. |
| `table` | The table to extract the data from. This parameter also accepts a subquery in parentheses. For example, `Person` or `(select id, name from Person) as subq`. Required when using partitions. |
| `partitionColumn` | The name of a column to use for partitioning. Only numeric columns are supported. Required when using partitions (see the sketch after this table). |
| `connectionProperties` | Optional: The properties string to use for the JDBC connection. The format of the string must be `[propertyName=property;]*`. For example, `unicode=true;characterEncoding=UTF-8`. For more information, see Configuration Properties in the MySQL documentation. |
| `username` | Optional: The username to use for the JDBC connection. You can pass in this value encrypted by a Cloud KMS key as a Base64-encoded string. |
| `password` | Optional: The password to use for the JDBC connection. You can pass in this value encrypted by a Cloud KMS key as a Base64-encoded string. |
| `KMSEncryptionKey` | Optional: The Cloud KMS encryption key to use to decrypt the username, password, and connection string. If you pass in a Cloud KMS key, you must also encrypt the username, password, and connection string. |
| `numPartitions` | Optional: The number of partitions to use. If not specified, the worker assumes a conservative number. |
| `disabledAlgorithms` | Optional: Comma-separated algorithms to disable. If this value is set to `none`, no algorithm is disabled. Use this parameter with caution, because the algorithms disabled by default might have vulnerabilities or performance issues. For example, `SSLv3, RC4`. |
| `extraFilesToStage` | Comma-separated Cloud Storage paths or Secret Manager secrets for files to stage on the worker. These files are saved in the `/extra_files` directory on each worker. For example, `gs://<my-bucket>/file.txt,projects/<project-id>/secrets/<secret-id>/versions/<version-id>`. |
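The query, table, partitionColumn, and numPartitions parameters work together: use query alone for an unpartitioned read, or table with partitionColumn (and optionally numPartitions) for a parallel, partitioned read. The following hypothetical invocation illustrates the partitioned form; the project, region, bucket, table, and credential values are placeholders:

```
# Hypothetical partitioned read: the template splits the numeric "id"
# column's value range into 8 ranges and issues one JDBC read per partition.
gcloud dataflow flex-template run mysql-to-bq-partitioned \
    --project=my-project \
    --region=us-central1 \
    --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/MySQL_to_BigQuery \
    --parameters \
connectionURL=jdbc:mysql://some-host:3306/sampledb,\
table=Person,\
partitionColumn=id,\
numPartitions=8,\
username=dbuser,\
password=dbpass,\
outputTable=my-project:my_dataset.person_copy,\
bigQueryLoadingTemporaryDirectory=gs://my-bucket/temp_dir
```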
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the MySQL to BigQuery template.
- In the provided parameter fields, enter your parameter values.
- Click Run job.
gcloud
In your shell or terminal, run the template:
```
gcloud dataflow flex-template run JOB_NAME \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/MySQL_to_BigQuery \
    --parameters \
connectionURL=JDBC_CONNECTION_URL,\
query=SOURCE_SQL_QUERY,\
outputTable=PROJECT_ID:DATASET.TABLE_NAME,\
bigQueryLoadingTemporaryDirectory=PATH_TO_TEMP_DIR_ON_GCS,\
connectionProperties=CONNECTION_PROPERTIES,\
username=CONNECTION_USERNAME,\
password=CONNECTION_PASSWORD,\
KMSEncryptionKey=KMS_ENCRYPTION_KEY
```
Replace the following:
- `JOB_NAME`: a unique job name of your choice
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- `REGION_NAME`: the region where you want to deploy your Dataflow job, for example `us-central1`
- `JDBC_CONNECTION_URL`: the JDBC connection URL
- `SOURCE_SQL_QUERY`: the SQL query to run on the source database
- `DATASET`: your BigQuery dataset
- `TABLE_NAME`: your BigQuery table name
- `PATH_TO_TEMP_DIR_ON_GCS`: your Cloud Storage path to the temp directory
- `CONNECTION_PROPERTIES`: the JDBC connection properties, if needed
- `CONNECTION_USERNAME`: the JDBC connection username
- `CONNECTION_PASSWORD`: the JDBC connection password
- `KMS_ENCRYPTION_KEY`: the Cloud KMS encryption key (see the example invocation after this list)
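To tie the encrypted-credentials flow together, here is a hypothetical invocation that passes Base64-encoded ciphertext produced by gcloud kms encrypt; the ENCRYPTED_* placeholders stand for those values, and the project, bucket, and key resource names are illustrative:

```
# Hypothetical run with Cloud KMS-encrypted credentials. ENCRYPTED_* stands
# for Base64-encoded ciphertext created with the same key passed below.
gcloud dataflow flex-template run mysql-to-bq-encrypted \
    --project=my-project \
    --region=us-central1 \
    --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/MySQL_to_BigQuery \
    --parameters \
connectionURL=ENCRYPTED_CONNECTION_URL,\
query="select * from sampledb.sample_table",\
outputTable=my-project:my_dataset.sample_copy,\
bigQueryLoadingTemporaryDirectory=gs://my-bucket/temp_dir,\
username=ENCRYPTED_USERNAME,\
password=ENCRYPTED_PASSWORD,\
KMSEncryptionKey=projects/my-project/locations/global/keyRings/my-keyring/cryptoKeys/my-key
```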
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.locations.flexTemplates.launch. An example curl invocation appears after the list of replacements below.
```
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
  "launchParameter": {
    "jobName": "JOB_NAME",
    "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/MySQL_to_BigQuery",
    "parameters": {
      "connectionURL": "JDBC_CONNECTION_URL",
      "query": "SOURCE_SQL_QUERY",
      "outputTable": "PROJECT_ID:DATASET.TABLE_NAME",
      "bigQueryLoadingTemporaryDirectory": "PATH_TO_TEMP_DIR_ON_GCS",
      "connectionProperties": "CONNECTION_PROPERTIES",
      "username": "CONNECTION_USERNAME",
      "password": "CONNECTION_PASSWORD",
      "KMSEncryptionKey": "KMS_ENCRYPTION_KEY"
    },
    "environment": { "zone": "us-central1-f" }
  }
}
```
Replace the following:
- `PROJECT_ID`: the Google Cloud project ID where you want to run the Dataflow job
- `JOB_NAME`: a unique job name of your choice
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-LOCATION/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-LOCATION/
- `LOCATION`: the region where you want to deploy your Dataflow job, for example `us-central1`
- `JDBC_CONNECTION_URL`: the JDBC connection URL
- `SOURCE_SQL_QUERY`: the SQL query to run on the source database
- `DATASET`: your BigQuery dataset
- `TABLE_NAME`: your BigQuery table name
- `PATH_TO_TEMP_DIR_ON_GCS`: your Cloud Storage path to the temp directory
- `CONNECTION_PROPERTIES`: the JDBC connection properties, if needed
- `CONNECTION_USERNAME`: the JDBC connection username
- `CONNECTION_PASSWORD`: the JDBC connection password
- `KMS_ENCRYPTION_KEY`: the Cloud KMS encryption key
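One convenient way to send this request from a shell is curl with an OAuth access token taken from your gcloud credentials. This sketch assumes you saved the JSON body above as request.json and already substituted the placeholders:

```
# Send the launch request; the access token comes from your active gcloud
# credentials, and request.json holds the JSON body shown above.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json \
  "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch"
```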
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.