File Format Conversion (Avro, Parquet, CSV) template

The File Format Conversion template is a batch pipeline that converts files stored on Cloud Storage from one supported format to another.

The following format conversions are supported:

  • CSV to Avro
  • CSV to Parquet
  • Avro to Parquet
  • Parquet to Avro

Pipeline requirements

  • The output Cloud Storage bucket must exist before running the pipeline.

Template parameters

  • inputFileFormat: The input file format. Must be one of [csv, avro, parquet].
  • outputFileFormat: The output file format. Must be one of [avro, parquet].
  • inputFileSpec: The Cloud Storage path pattern for input files. For example, gs://bucket-name/path/*.csv
  • outputBucket: The Cloud Storage folder to write output files to. This path must end with a slash. For example, gs://bucket-name/output/
  • schema: The Cloud Storage path to the Avro schema file. For example, gs://bucket-name/schema/my-schema.avsc
  • containsHeaders: (Optional) Whether the input CSV files contain a header record (true or false). The default value is false. Required only when reading CSV files.
  • csvFormat: (Optional) The CSV format specification to use for parsing records. The default value is Default. For details, see Apache Commons CSV Format.
  • delimiter: (Optional) The field delimiter used by the input CSV files.
  • outputFilePrefix: (Optional) The output file prefix. The default value is output.
  • numShards: (Optional) The number of output file shards.
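The schema parameter points to an Avro schema (.avsc) file that describes the record layout of the output. As an illustration, a minimal schema for a two-column CSV might look like the following (the record and field names here are hypothetical):

```json
{
  "type": "record",
  "name": "user_record",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": "string"}
  ]
}
```

When converting from CSV, the field order in the schema must match the column order of the input files.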

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the Convert file formats template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow flex-template run JOB_NAME \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/File_Format_Conversion \
    --parameters \
inputFileFormat=INPUT_FORMAT,\
outputFileFormat=OUTPUT_FORMAT,\
inputFileSpec=INPUT_FILES,\
schema=SCHEMA,\
outputBucket=OUTPUT_FOLDER

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • REGION_NAME: the region where you want to deploy your Dataflow job—for example, us-central1
  • VERSION: the version of the template that you want to use. You can use latest to run the most recent version of the template, or a dated version name to run a specific version.

  • INPUT_FORMAT: the file format of the input file; must be one of [csv, avro, parquet]
  • OUTPUT_FORMAT: the file format of the output files; must be one of [avro, parquet]
  • INPUT_FILES: the path pattern for input files
  • OUTPUT_FOLDER: your Cloud Storage folder for output files
  • SCHEMA: the path to the Avro schema file
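The --parameters flag takes a single comma-separated list of KEY=VALUE pairs, which is why the command above joins them with commas and line-continuation backslashes. A small sketch of how that value can be assembled from a dictionary (the helper name and the bucket paths below are hypothetical, not part of the template):

```python
# Sketch: build the comma-separated value that the gcloud --parameters
# flag expects from a plain dict of template parameters.
def build_parameters(params: dict) -> str:
    # gcloud splits the list on commas, so values must not contain
    # unescaped commas (gcloud offers alternate-delimiter syntax for that case).
    for value in params.values():
        if "," in value:
            raise ValueError(f"unescaped comma in parameter value: {value!r}")
    return ",".join(f"{key}={value}" for key, value in params.items())

args = build_parameters({
    "inputFileFormat": "csv",
    "outputFileFormat": "avro",
    "inputFileSpec": "gs://example-bucket/input/*.csv",
    "schema": "gs://example-bucket/schema/my-schema.avsc",
    "outputBucket": "gs://example-bucket/output/",
})
print(args)
```

The resulting string can be passed directly after --parameters in the gcloud command shown above.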

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.locations.flexTemplates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
   "launch_parameter": {
      "jobName": "JOB_NAME",
      "parameters": {
          "inputFileFormat": "INPUT_FORMAT",
          "outputFileFormat": "OUTPUT_FORMAT",
          "inputFileSpec": "INPUT_FILES",
          "schema": "SCHEMA",
          "outputBucket": "OUTPUT_FOLDER"
      },
      "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/File_Format_Conversion",
   }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • LOCATION: the region where you want to deploy your Dataflow job—for example, us-central1
  • VERSION: the version of the template that you want to use. You can use latest to run the most recent version of the template, or a dated version name to run a specific version.

  • INPUT_FORMAT: the file format of the input file; must be one of [csv, avro, parquet]
  • OUTPUT_FORMAT: the file format of the output files; must be one of [avro, parquet]
  • INPUT_FILES: the path pattern for input files
  • OUTPUT_FOLDER: your Cloud Storage folder for output files
  • SCHEMA: the path to the Avro schema file
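The request URL and body above can be assembled programmatically before being sent with an authenticated HTTP client (obtaining OAuth credentials is outside this sketch, and all project, job, and bucket names below are placeholders):

```python
import json

def build_launch_request(project_id: str, location: str, version: str,
                         job_name: str, parameters: dict):
    """Return the (url, body_json) pair for a flexTemplates:launch call."""
    url = (f"https://dataflow.googleapis.com/v1b3/projects/{project_id}"
           f"/locations/{location}/flexTemplates:launch")
    body = {
        "launch_parameter": {
            "jobName": job_name,
            "parameters": parameters,
            "containerSpecGcsPath": (f"gs://dataflow-templates-{location}"
                                     f"/{version}/flex/File_Format_Conversion"),
        }
    }
    return url, json.dumps(body)

url, body = build_launch_request(
    "my-project", "us-central1", "latest", "csv-to-avro-job",
    {
        "inputFileFormat": "csv",
        "outputFileFormat": "avro",
        "inputFileSpec": "gs://example-bucket/input/*.csv",
        "schema": "gs://example-bucket/schema/my-schema.avsc",
        "outputBucket": "gs://example-bucket/output/",
    },
)
print(url)
```

The returned body matches the JSON shown above and can be POSTed to the returned URL with an Authorization: Bearer header.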

What's next