Loading Avro data from Cloud Storage

Avro is an open source data format that bundles serialized data with the data's schema in the same file.

When you load Avro data from Cloud Storage, you can load the data into a new table or partition, or you can append to or overwrite an existing table or partition. When your data is loaded into BigQuery, it is converted into columnar format for Capacitor (BigQuery's storage format).

When you load data from Cloud Storage into a BigQuery table, the dataset that contains the table must be in the same regional or multi-regional location as the Cloud Storage bucket.

For information about loading Avro data from a local file, see Loading data into BigQuery from a local data source.

Advantages of Avro

Avro is the preferred format for loading data into BigQuery. Loading Avro files has the following advantages over CSV and JSON (newline delimited):

  • The Avro binary format:
    • Is faster to load. The data can be read in parallel, even if the data blocks are compressed.
    • Doesn't require typing or serialization.
    • Is easier to parse because there are no encoding issues found in other formats such as ASCII.
  • When you load Avro files into BigQuery, the table schema is automatically retrieved from the self-describing source data.

Avro schemas

When you load Avro files into BigQuery, the table schema is automatically retrieved from the source data. When BigQuery retrieves the schema from the source data, the alphabetically last file is used.

For example, suppose you have the following Avro files in Cloud Storage:

gs://mybucket/00/
  a.avro
  z.avro
gs://mybucket/01/
  b.avro

The following command loads all of the files in a single CLI command (as a comma-separated list), and the schema is derived from mybucket/01/b.avro:

bq load \
--source_format=AVRO \
dataset.table \
"gs://mybucket/00/*.avro","gs://mybucket/01/*.avro"

When importing multiple Avro files with different Avro schemas, all schemas must be compatible with Avro's schema resolution.

When BigQuery detects the schema, some Avro data types are converted to BigQuery data types to make them compatible with BigQuery SQL syntax. For more information, see Avro conversions.
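
For comparison, the same comma-separated-list load can be expressed with the Python client library. The following is a minimal sketch that reuses the placeholder bucket, dataset, and table names from the example above:

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.AVRO

# A list of URIs (wildcards are allowed); as described above, the schema
# is taken from the alphabetically last matching file.
uris = ["gs://mybucket/00/*.avro", "gs://mybucket/01/*.avro"]

load_job = client.load_table_from_uri(
    uris, client.dataset("dataset").table("table"), job_config=job_config
)
load_job.result()  # Waits for the load to complete.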

Avro compression

Compressed Avro files are not supported, but compressed data blocks are. BigQuery supports the DEFLATE and Snappy codecs.
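
For example, a file whose data blocks are DEFLATE- or Snappy-compressed can be produced with an Avro library such as fastavro. The sketch below is illustrative only; fastavro is not part of BigQuery, and the schema, records, and file name are made up:

from fastavro import parse_schema, writer

# Hypothetical schema and records, for illustration only.
schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": "int"},
    ],
})
records = [{"name": "Ada", "age": 36}, {"name": "Lin", "age": 29}]

# Only the data blocks are compressed; the file itself is a regular Avro
# container, which is what BigQuery supports.
with open("users.avro", "wb") as out:
    writer(out, schema, records, codec="deflate")  # or codec="snappy"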

Required permissions

When you load data into BigQuery, you need permissions to run a load job and permissions that allow you to load data into new or existing BigQuery tables and partitions. If you are loading data from Cloud Storage, you also need permissions to access the bucket that contains your data.

BigQuery permissions

At a minimum, the following permissions are required to load data into BigQuery. These permissions are required if you are loading data into a new table or partition, or if you are appending or overwriting a table or partition.

  • bigquery.tables.create
  • bigquery.tables.updateData
  • bigquery.jobs.create

The following predefined Cloud IAM roles include both bigquery.tables.create and bigquery.tables.updateData permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

The following predefined Cloud IAM roles include bigquery.jobs.create permissions:

  • bigquery.user
  • bigquery.jobUser
  • bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to create and update tables in the dataset via a load job.

For more information on Cloud IAM roles and permissions in BigQuery, see Access control.

Cloud Storage permissions

In order to load data from a Cloud Storage bucket, you must be granted storage.objects.get permissions. If you are using a URI wildcard, you must also have storage.objects.list permissions.

The predefined Cloud IAM role storage.objectViewer can be granted to provide both storage.objects.get and storage.objects.list permissions.

Loading Avro data into a new table

You can load Avro data into a new table by:

  • Using the GCP Console or the classic web UI
  • Using the CLI's bq load command
  • Calling the jobs.insert API method and configuring a load job
  • Using the client libraries

To load Avro data from Cloud Storage into a new BigQuery table:

Console

  1. Open the BigQuery web UI in the GCP Console.

  2. In the navigation panel, in the Resources section, expand your project and select a dataset.

  3. On the right side of the window, in the details panel, click Create table. The process for loading data is the same as the process for creating an empty table.

  4. On the Create table page, in the Source section:

    • For Create table from, select Cloud Storage.

    • In the source field, browse to or enter the Cloud Storage URI. Note that you cannot include multiple URIs in the GCP Console, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're creating.

    • For File format, select Avro.

  5. On the Create table page, in the Destination section:

    • For Dataset name, choose the appropriate dataset.

    • Verify that Table type is set to Native table.

    • In the Table name field, enter the name of the table you're creating in BigQuery.

  6. In the Schema section, no action is necessary. The schema is self-described in Avro files.

  7. (Optional) To partition the table, choose your options in the Partition and cluster settings:

    • To create a partitioned table, click No partitioning, select Partition by field and choose a DATE or TIMESTAMP column. This option is unavailable if your schema does not include a DATE or TIMESTAMP column.
    • To create an ingestion-time partitioned table, click No partitioning and select Partition by ingestion time.
  8. (Optional) For Partitioning filter, click the Require partition filter box to require users to include a WHERE clause that specifies the partitions to query. Requiring a partition filter may reduce cost and improve performance. For more information, see Querying partitioned tables. This option is unavailable if No partitioning is selected.

  9. (Optional) To cluster the table, in the Clustering order box, enter between one and four field names. Currently, clustering is supported only for partitioned tables.

  10. (Optional) Click Advanced options.

    • For Write preference, leave Write if empty selected. This option creates a new table and loads your data into it.
    • For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job will result in an invalid message and fail.
    • For Unknown values, leave Ignore unknown values unchecked. This option applies only to CSV and JSON files.
    • For Encryption, click Customer-managed key to use a Cloud Key Management Service key. If you leave the Google-managed key setting, BigQuery encrypts the data at rest.
  11. Click Create table.

Classic UI

  1. Go to the BigQuery web UI.

  2. In the navigation panel, hover over a dataset, click the down arrow icon, and click Create new table. The process for loading data is the same as the process for creating an empty table.

  3. On the Create Table page, in the Source Data section:

    • Click Create from source.
    • For Location, select Cloud Storage and in the source field, enter the Cloud Storage URI. Note that you cannot include multiple URIs in the BigQuery web UI, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're creating.
    • For File format, select Avro.
  4. In the Destination Table section:

    • For Table name, choose the appropriate dataset, and in the table name field, enter the name of the table you're creating in BigQuery.
    • Verify that Table type is set to Native table.
  5. In the Schema section, no action is necessary. The schema is self-described in Avro files.

  6. (Optional) In the Options section:

    • For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job will result in an invalid message and fail.
    • For Write preference, leave Write if empty selected. This option creates a new table and loads your data into it.
    • To partition the table:
      • For Partitioning Type, click None and choose Day.
      • For Partitioning Field:
        • To create a partitioned table, choose a DATE or TIMESTAMP column. This option is unavailable if your schema does not include a DATE or TIMESTAMP column.
        • To create an ingestion-time partitioned table, leave the default value: _PARTITIONTIME.
      • Click the Require partition filter box to require users to include a WHERE clause that specifies the partitions to query. Requiring a partition filter may reduce cost and improve performance. For more information, see Querying partitioned tables. This option is unavailable if Partitioning type is set to None.
    • To cluster the table, in the Clustering fields box, enter between one and four field names.
    • For Destination encryption, choose Customer-managed encryption to use a Cloud Key Management Service key to encrypt the table. If you leave the Default setting, BigQuery encrypts the data at rest using a Google-managed key.
  7. Click Create Table.

CLI

Use the bq load command, specify AVRO using the --source_format flag, and include a Cloud Storage URI. You can include a single URI, a comma-separated list of URIs, or a URI containing a wildcard.

(Optional) Supply the --location flag and set the value to your location.

Other optional flags include:

  • --max_bad_records: An integer that specifies the maximum number of bad records allowed before the entire job fails. The default value is 0. At most, five errors of any type are returned regardless of the --max_bad_records value.
  • --time_partitioning_type: Enables time-based partitioning on a table and sets the partition type. Currently, the only possible value is DAY which generates one partition per day. This flag is optional when you create a table partitioned on a DATE or TIMESTAMP column.
  • --time_partitioning_expiration: An integer that specifies (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value.
  • --time_partitioning_field: The DATE or TIMESTAMP column used to create a partitioned table. If time-based partitioning is enabled without this value, an ingestion-time partitioned table is created.
  • --require_partition_filter: When enabled, this option requires users to include a WHERE clause that specifies the partitions to query. Requiring a partition filter may reduce cost and improve performance. For more information, see Querying partitioned tables.
  • --clustering_fields: A comma-separated list of up to four column names used to create a clustered table. This flag can only be used with partitioned tables.
  • --destination_kms_key: The Cloud KMS key for encryption of the table data.

For more information on partitioned tables, clustered tables, and table encryption, see the related BigQuery documentation.

To load Avro data into BigQuery, enter the following command:

bq --location=location load \
--source_format=format \
dataset.table \
path_to_source

Where:

  • location is your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • format is AVRO.
  • dataset is an existing dataset.
  • table is the name of the table into which you're loading data.
  • path_to_source is a fully-qualified Cloud Storage URI or a comma-separated list of URIs. Wildcards are also supported.

Examples:

The following command loads data from gs://mybucket/mydata.avro into a table named mytable in mydataset.

    bq load \
    --source_format=AVRO \
    mydataset.mytable \
    gs://mybucket/mydata.avro

The following command loads data from gs://mybucket/mydata.avro into an ingestion-time partitioned table named mytable in mydataset.

    bq load \
    --source_format=AVRO \
    --time_partitioning_type=DAY \
    mydataset.mytable \
    gs://mybucket/mydata.avro

The following command loads data from gs://mybucket/mydata.avro into a partitioned table named mytable in mydataset. The table is partitioned on the mytimestamp column.

    bq load \
    --source_format=AVRO \
    --time_partitioning_field mytimestamp \
    mydataset.mytable \
    gs://mybucket/mydata.avro

The following command loads data from multiple files in gs://mybucket/ into a table named mytable in mydataset. The Cloud Storage URI uses a wildcard.

    bq load \
    --source_format=AVRO \
    mydataset.mytable \
    gs://mybucket/mydata*.avro

The following command loads data from multiple files in gs://mybucket/ into a table named mytable in mydataset. The command includes a comma-separated list of Cloud Storage URIs with wildcards.

    bq load \
    --source_format=AVRO \
    mydataset.mytable \
    "gs://mybucket/00/*.avro","gs://mybucket/01/*.avro"

API

  1. Create a load job that points to the source data in Cloud Storage.

  2. (Optional) Specify your location in the location property in the jobReference section of the job resource.

  3. The source URIs property must be fully-qualified, in the format gs://bucket/object. Each URI can contain one '*' wildcard character.

  4. Specify the Avro data format by setting the sourceFormat property to AVRO.

  5. To check the job status, call jobs.get(job_id), where job_id is the ID of the job returned by the initial request.

    • If status.state = DONE, the job completed successfully.
    • If the status.errorResult property is present, the request failed, and that object will include information describing what went wrong. When a request fails, no table is created and no data is loaded.
    • If status.errorResult is absent, the job finished successfully, although there might have been some non-fatal errors, such as problems importing a few rows. Non-fatal errors are listed in the returned job object's status.errors property.

API notes:

  • Load jobs are atomic and consistent; if a load job fails, none of the data is available, and if a load job succeeds, all of the data is available.

  • As a best practice, generate a unique ID and pass it as jobReference.jobId when calling jobs.insert to create a load job. This approach is more robust to network failure because the client can poll or retry on the known job ID.

  • Calling jobs.insert on a given job ID is idempotent. You can retry as many times as you like on the same job ID, and at most one of those operations will succeed.
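
A sketch of that best practice with the Python client library is shown below; it assumes the google-cloud-bigquery library, and the bucket, dataset, and table names are placeholders:

import uuid

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.AVRO

# Generate the job ID client-side so that a retry after a network failure
# reuses the same ID instead of starting a second load.
job_id = "load_avro_{}".format(uuid.uuid4())

load_job = client.load_table_from_uri(
    "gs://mybucket/mydata.avro",
    client.dataset("mydataset").table("mytable"),
    job_id=job_id,
    job_config=job_config,
)

# Poll the known job ID; state becomes "DONE" when the job finishes, and
# error_result is populated if it failed.
job = client.get_job(job_id)
print(job.state, job.error_result)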

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries . For more information, see the BigQuery Python API reference documentation .

from google.cloud import bigquery

client = bigquery.Client()
# TODO: set dataset_id to the dataset that will contain the table.
dataset_id = "my_dataset"

dataset_ref = client.dataset(dataset_id)
job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.AVRO
uri = "gs://cloud-samples-data/bigquery/us-states/us-states.avro"

load_job = client.load_table_from_uri(
    uri, dataset_ref.table("us_states"), job_config=job_config
)  # API request
print("Starting job {}".format(load_job.job_id))

load_job.result()  # Waits for table load to complete.
print("Job finished.")

destination_table = client.get_table(dataset_ref.table("us_states"))
print("Loaded {} rows.".format(destination_table.num_rows))

Appending to or overwriting a table with Avro data

You can load additional data into a table either from source files or by appending query results.

In the console and the classic BigQuery web UI, you use the Write preference option to specify what action to take when you load data from a source file or from a query result.

You have the following options when you load additional data into a table:

  • Write if empty (Console and classic web UI). CLI flag: none. API property: WRITE_EMPTY. Writes the data only if the table is empty.
  • Append to table (Console and classic web UI). CLI flag: --noreplace or --replace=false; if --[no]replace is unspecified, the default is to append. API property: WRITE_APPEND (default). Appends the data to the end of the table.
  • Overwrite table (Console and classic web UI). CLI flag: --replace or --replace=true. API property: WRITE_TRUNCATE. Erases all existing data in a table before writing the new data.

If you load data into an existing table, the load job can append the data or overwrite the table.
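
In the Python client library, the same write options map to constants on bigquery.WriteDisposition, set on the load job configuration. A brief sketch (if you set nothing, WRITE_APPEND is the default):

from google.cloud import bigquery

job_config = bigquery.LoadJobConfig()

# Pick exactly one, mirroring the write options listed above.
job_config.write_disposition = bigquery.WriteDisposition.WRITE_EMPTY      # Write if empty
# job_config.write_disposition = bigquery.WriteDisposition.WRITE_APPEND   # Append to table (default)
# job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE # Overwrite table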

You can append or overwrite a table by:

  • Using the GCP Console or the classic web UI
  • Using the CLI's bq load command
  • Calling the jobs.insert API method and configuring a load job
  • Using the client libraries

To append or overwrite a table with Avro data:

Console

  1. Open the BigQuery web UI in the GCP Console.

  2. In the navigation panel, in the Resources section, expand your project and select a dataset.

  3. On the right side of the window, in the details panel, click Create table. The process for appending and overwriting data in a load job is the same as the process for creating a table in a load job.

  4. On the Create table page, in the Source section:

    • For Create table from, select Cloud Storage.

    • In the source field, browse to or enter the Cloud Storage URI. Note that you cannot include multiple URIs in the GCP Console, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're appending or overwriting.

    • For File format, select Avro.

  5. On the Create table page, in the Destination section:

    • For Dataset name, choose the appropriate dataset.

    • In the Table name field, enter the name of the table you're appending or overwriting in BigQuery.

    • Verify that Table type is set to Native table.

  6. In the Schema section, no action is necessary. The schema is self-described in Avro files.

  7. For Partition and cluster settings, leave the default values. You cannot convert a table to a partitioned or clustered table by appending or overwriting it, and the GCP Console does not support appending to or overwriting partitioned or clustered tables in a load job.

  8. Click Advanced options.

    • For Write preference, choose Append to table or Overwrite table.
    • For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job will result in an invalid message and fail.
    • For Unknown values, leave Ignore unknown values unchecked. This option applies only to CSV and JSON files.
    • For Encryption, click Customer-managed key to use a Cloud Key Management Service key. If you leave the Google-managed key setting, BigQuery encrypts the data at rest.

  9. Click Create table.

Classic UI

  1. Go to the BigQuery web UI.

  2. In the navigation panel, hover over a dataset, click the down arrow icon, and click Create new table. The process for appending and overwriting data in a load job is the same as the process for creating a table in a load job.

  3. On the Create Table page, in the Source Data section:

    • For Location, select Cloud Storage and in the source field, enter the Cloud Storage URI. Note that you cannot include multiple URIs in the UI, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're appending or overwriting.
    • For File format, select Avro.
  4. On the Create Table page, in the Destination Table section:

    • For Table name, choose the appropriate dataset, and in the table name field, enter the name of the table you're appending or overwriting.
    • Verify that Table type is set to Native table.
  5. In the Schema section, no action is necessary. Schema information is self-described in Avro files.

  6. In the Options section:

    • For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job will result in an invalid message and fail.
    • For Write preference, choose Append to table or Overwrite table.
    • Leave the default values for Partitioning Type, Partitioning Field, Require partition filter, and Clustering Fields. You cannot convert a table to a partitioned or clustered table by appending or overwriting it, and the web UI does not support appending to or overwriting partitioned or clustered tables in a load job.
    • For Destination encryption, choose Customer-managed encryption to use a Cloud Key Management Service key to encrypt the table. If you leave the Default setting, BigQuery encrypts the data at rest using a Google-managed key.
  7. Click Create Table.

CLI

Enter the bq load command with the --replace flag to overwrite the table. Use the --noreplace flag to append data to the table. If no flag is specified, the default is to append data. Supply the --source_format flag and set it to AVRO. Because Avro schemas are automatically retrieved from the self-describing source data, you do not need to provide a schema definition.

(Optional) Supply the --location flag and set the value to your location.

Other optional flags include:

  • --max_bad_records: An integer that specifies the maximum number of bad records allowed before the entire job fails. The default value is 0. At most, five errors of any type are returned regardless of the --max_bad_records value.
  • --destination_kms_key: The Cloud KMS key for encryption of the table data.

To append to or overwrite a table, enter the following command:

bq --location=location load \
--[no]replace \
--source_format=format \
dataset.table \
path_to_source

Where:

  • location is your location. The --location flag is optional. You can set a default value for the location by using the .bigqueryrc file.
  • format is AVRO.
  • dataset is an existing dataset.
  • table is the name of the table into which you're loading data.
  • path_to_source is a fully-qualified Cloud Storage URI or a comma-separated list of URIs. Wildcards are also supported.

Examples:

The following command loads data from gs://mybucket/mydata.avro and overwrites a table named mytable in mydataset.

    bq load \
    --replace \
    --source_format=AVRO \
    mydataset.mytable \
    gs://mybucket/mydata.avro

The following command loads data from gs://mybucket/mydata.avro and appends data to a table named mytable in mydataset.

    bq load \
    --noreplace \
    --source_format=AVRO \
    mydataset.mytable \
    gs://mybucket/mydata.avro

For information on appending and overwriting partitioned tables using the CLI, see: Appending to and overwriting partitioned table data.

API

  1. Create a load job that points to the source data in Cloud Storage.

  2. (Optional) Specify your location in the location property in the jobReference section of the job resource.

  3. The source URIs property must be fully-qualified, in the format gs://bucket/object. You can include multiple URIs as a comma-separated list. Note that wildcards are also supported.

  4. Specify the data format by setting the configuration.load.sourceFormat property to AVRO.

  5. Specify the write preference by setting the configuration.load.writeDisposition property to WRITE_TRUNCATE or WRITE_APPEND.

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries . For more information, see the BigQuery Python API reference documentation .

from google.cloud import bigquery

client = bigquery.Client()
# TODO: point table_ref at the existing table to append to or overwrite.
table_ref = client.dataset("my_dataset").table("existing_table")

job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.source_format = bigquery.SourceFormat.AVRO
uri = "gs://cloud-samples-data/bigquery/us-states/us-states.avro"
load_job = client.load_table_from_uri(
    uri, table_ref, job_config=job_config
)  # API request
print("Starting job {}".format(load_job.job_id))

load_job.result()  # Waits for table load to complete.
print("Job finished.")

destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))

Avro conversions

BigQuery converts Avro data types to the following BigQuery data types:

Primitive types

  • null: BigQuery ignores these values
  • boolean: BOOLEAN
  • int: INTEGER
  • long: INTEGER
  • float: FLOAT
  • double: FLOAT
  • bytes: BYTES
  • bytes with a decimal logical type: NUMERIC
  • string: STRING (UTF-8 only)

Complex types

  • record: RECORD
    • Aliases are ignored
    • Doc is converted into a field description
    • Default values are set at read time
    • Order is ignored
    • Recursive fields are dropped; only the first level of nesting is maintained for recursive fields
  • enum: STRING
    • The string is the symbolic value of the enum
    • Aliases are ignored
    • Doc is converted into a field description
  • array: repeated fields. Arrays of arrays are not supported. Arrays containing only NULL types are ignored.
  • map<T>: RECORD. BigQuery converts an Avro map<T> field to a repeated RECORD that contains two fields: a key and a value. BigQuery stores the key as a STRING, and converts the value to its corresponding data type in BigQuery.
  • union: Nullable field, or RECORD with a list of nullable fields
    • When a union has only one non-null type, it converts to a nullable field.
    • Otherwise, it converts to a RECORD with a list of nullable fields. Only one of these fields will be set at read time.
  • fixed: BYTES
    • Aliases are ignored
    • Size is ignored
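
To illustrate the map<T> conversion, the hypothetical Avro field below (named attributes, a map of strings) would surface in BigQuery as a repeated RECORD with key and value subfields, which can be expressed with the Python client's schema types roughly as follows:

from google.cloud import bigquery

# Hypothetical Avro field definition, written as a Python dict.
avro_field = {"name": "attributes", "type": {"type": "map", "values": "string"}}

# Approximate resulting BigQuery schema for that field.
converted = bigquery.SchemaField(
    "attributes",
    "RECORD",
    mode="REPEATED",
    fields=[
        bigquery.SchemaField("key", "STRING"),
        bigquery.SchemaField("value", "STRING"),
    ],
)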

Logical types

By default, BigQuery ignores the logicalType attribute and uses the underlying Avro type instead.

  • date: INTEGER
  • time-millis: INTEGER
  • time-micros: INTEGER (converted from LONG)
  • timestamp-millis: INTEGER (converted from LONG)
  • timestamp-micros: INTEGER (converted from LONG)
  • duration: BYTES (converted from fixed type of size 12)
  • decimal: NUMERIC (see Decimal logical type)

To enable the conversion of Avro logical types to their corresponding BigQuery data types, set the --use_avro_logical_types flag to True using the command-line tool, or set the useAvroLogicalTypes property in the job resource when you call the jobs.insert method to create a load job.
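
With the Python client library, the corresponding switch is the use_avro_logical_types attribute on LoadJobConfig; a minimal sketch:

from google.cloud import bigquery

job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.AVRO

# Convert Avro logical types (date, time-*, timestamp-*, decimal) to their
# corresponding BigQuery types instead of the underlying Avro types.
job_config.use_avro_logical_types = True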

The following list shows the conversion of Avro logical types to BigQuery data types when logical type conversion is enabled:

  • date: DATE
  • time-millis: TIME
  • time-micros: TIME
  • timestamp-millis: TIMESTAMP
  • timestamp-micros: TIMESTAMP
  • duration: BYTES (converted from fixed type of size 12)
  • decimal: NUMERIC (see Decimal logical type)

For more information on Avro data types, see the Apache Avro™ 1.8.2 Specification.

Decimal logical type

An Avro bytes type with a decimal logical type may have at most a precision of 38 (total number of digits) and at most a scale of 9 (digits to the right of the decimal). The number of integer digits, which is the precision minus the scale, may be at most 29. For example, a decimal with a precision of 38 and a scale of 9 is supported because the number of integer digits is 29. A decimal with a precision of 38 and a scale of 5 is not supported because the number of integer digits is 33.

When you load an Avro file containing a bytes column with the decimal logical type into an existing table, the column's data type in the table's schema definition may be either BYTES or NUMERIC. If the column's data type is BYTES, the decimal logical type on the column in the Avro file is ignored.

For more information on the Avro decimal logical type, see the Apache Avro™ 1.8.2 Specification.
