Cloud Storage Avro to Spanner template

The Cloud Storage Avro files to Spanner template is a batch pipeline that reads Avro files that were exported from Spanner and stored in Cloud Storage, and imports them into a Spanner database.

Pipeline requirements

  • The target Spanner database must exist and must be empty.
  • You must have read permissions for the Cloud Storage bucket and write permissions for the target Spanner database.
  • The input Cloud Storage path must exist, and it must include a spanner-export.json file that contains a JSON description of the files to import.
  • If the source Avro file doesn't contain a primary key, you must create an empty Spanner table with a primary key before you run the template. This step isn't required if the Avro file defines the primary key. Illustrative sketches of both the manifest and the DDL follow this list.
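
The exact manifest schema is produced by the Spanner export pipeline; as a rough, hypothetical sketch, a minimal spanner-export.json for a single table named Singers might look like the following:

{
  "tables": [
    {
      "name": "Singers",
      "manifestFile": "Singers-manifest.json"
    }
  ]
}

If that table's Avro file didn't define a primary key, you could create the empty table first with Spanner DDL along these lines (hypothetical schema):

CREATE TABLE Singers (
  SingerId  INT64 NOT NULL,
  FirstName STRING(1024),
  LastName  STRING(1024),
) PRIMARY KEY (SingerId)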

Template parameters

The template takes the following parameters:

  • instanceId: The instance ID of the Spanner database.
  • databaseId: The database ID of the Spanner database.
  • inputDir: The Cloud Storage path where the Avro files are imported from.
  • spannerProjectId: Optional. The Google Cloud project ID of the Spanner database. If not set, the default Google Cloud project is used.
  • spannerPriority: Optional. The request priority for Spanner calls. Possible values are HIGH, MEDIUM, and LOW. The default value is MEDIUM.
  • ddlCreationTimeoutInMinutes: Optional. The timeout, in minutes, for DDL statements performed by the template. The default value is 30.
  • earlyIndexCreateFlag: Optional. Specifies whether to enable early index creation. If the template runs a large number of DDL statements, it's more efficient to create indexes before loading data. Therefore, the default behavior is to create the indexes first when the number of DDL statements exceeds a threshold. To disable this feature, set earlyIndexCreateFlag to false. The default value is true.
  • waitForChangeStreams: Optional. If true, the pipeline waits for change streams to be created. If false, the job might complete while change streams are still being created in the background. The default value is true.
  • waitForForeignKeys: Optional. If true, the pipeline waits for foreign keys to be created. If false, the job might complete while foreign keys are still being created in the background. The default value is false.
  • waitForIndexes: Optional. If true, the pipeline waits for indexes to be created. If false, the job might complete while indexes are still being created in the background. The default value is false.
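
The optional parameters are passed alongside the required ones. As an illustration, a hypothetical run that lowers the Spanner request priority and blocks until indexes are created might extend the --parameters list of the gcloud command shown later on this page like so:

--parameters \
instanceId=INSTANCE_ID,\
databaseId=DATABASE_ID,\
inputDir=GCS_DIRECTORY,\
spannerPriority=LOW,\
waitForIndexes=true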

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.

    For the job to appear on the Spanner Instances page of the Google Cloud console, the job name must match the following format:

    cloud-spanner-import-SPANNER_INSTANCE_ID-SPANNER_DATABASE_NAME

    Replace the following:

    • SPANNER_INSTANCE_ID: your Spanner instance's ID
    • SPANNER_DATABASE_NAME: your Spanner database's name
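
    For example, a hypothetical instance named test-instance with a database named example-db yields the following job name:

    cloud-spanner-import-test-instance-example-db
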
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the Avro Files on Cloud Storage to Cloud Spanner template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates-REGION_NAME/VERSION/GCS_Avro_to_Cloud_Spanner \
    --region REGION_NAME \
    --staging-location GCS_STAGING_LOCATION \
    --parameters \
instanceId=INSTANCE_ID,\
databaseId=DATABASE_ID,\
inputDir=GCS_DIRECTORY

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

    • latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
    • the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/

  • REGION_NAME: the region where you want to deploy your Dataflow job—for example, us-central1
  • GCS_STAGING_LOCATION: the Cloud Storage path for staging temporary files, for example, gs://mybucket/temp
  • INSTANCE_ID: the ID of the Spanner instance that contains the database
  • DATABASE_ID: the ID of the Spanner database to import to
  • GCS_DIRECTORY: the Cloud Storage path where the Avro files are imported from, for example, gs://mybucket/somefolder
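
For illustration, with hypothetical values filled in (instance test-instance, database example-db, bucket mybucket, region us-central1, and the latest template version), the command might look like this:

gcloud dataflow jobs run cloud-spanner-import-test-instance-example-db \
    --gcs-location gs://dataflow-templates-us-central1/latest/GCS_Avro_to_Cloud_Spanner \
    --region us-central1 \
    --staging-location gs://mybucket/temp \
    --parameters \
instanceId=test-instance,\
databaseId=example-db,\
inputDir=gs://mybucket/somefolder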

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/GCS_Avro_to_Cloud_Spanner
{
   "jobName": "JOB_NAME",
   "parameters": {
       "instanceId": "INSTANCE_ID",
       "databaseId": "DATABASE_ID",
       "inputDir": "gs://GCS_DIRECTORY"
   },
   "environment": {
       "machineType": "n1-standard-2"
   }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

    • latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-LOCATION/latest/
    • the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-LOCATION/

  • LOCATION: the region where you want to deploy your Dataflow job—for example, us-central1
  • INSTANCE_ID: the ID of the Spanner instance that contains the database
  • DATABASE_ID: the ID of the Spanner database to import to
  • GCS_DIRECTORY: the Cloud Storage path where the Avro files are imported from, for example, gs://mybucket/somefolder
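
One way to send this request is with curl, using gcloud to mint an access token. This is a sketch that assumes your gcloud credentials have permission to launch Dataflow jobs in the project:

curl -X POST \
    "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/GCS_Avro_to_Cloud_Spanner" \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
      "jobName": "JOB_NAME",
      "parameters": {
        "instanceId": "INSTANCE_ID",
        "databaseId": "DATABASE_ID",
        "inputDir": "GCS_DIRECTORY"
      },
      "environment": {
        "machineType": "n1-standard-2"
      }
    }'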

What's next