Query Blob Storage data

This document describes how to query data stored in an Azure Blob Storage BigLake table.

Before you begin

Ensure that you have a Blob Storage BigLake table.

Required roles

To query Blob Storage BigLake tables, ensure that the caller of the BigQuery API has the following roles:

  • BigQuery Connection User (roles/bigquery.connectionUser)
  • BigQuery Data Viewer (roles/bigquery.dataViewer)
  • BigQuery User (roles/bigquery.user)

The caller can be your account or a Blob Storage connection service account. Depending on your permissions, you can grant these roles to yourself or ask your administrator to grant them to you. For more information about granting roles, see Viewing the grantable roles on resources.

You might also be able to get these permissions with custom roles or other predefined roles.
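
For example, an administrator can grant one of these roles at the project level by using the gcloud CLI. The following is a minimal sketch; PROJECT_ID and USER_EMAIL are placeholder values, and the command must be repeated for each required role:

  # Illustrative: grants a single required role at the project level.
  gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/bigquery.user"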

Query Blob Storage BigLake tables

After creating a Blob Storage BigLake table, you can query it using GoogleSQL syntax, the same as if it were a standard BigQuery table.

Query results are cached in a BigQuery temporary table. To query the temporary table, see Query a temporary table. For more information about BigQuery Omni limitations and quotas, see Limitations and Quotas.

When creating a reservation in a BigQuery Omni region, use the Enterprise edition. To learn how to create a reservation with an edition, see Create reservations.
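
For example, the following bq command-line sketch creates an Enterprise edition reservation in a BigQuery Omni region; the region (azure-eastus2), slot count, and reservation name are illustrative, and the flags assume a recent version of the bq tool:

  # Illustrative: creates an Enterprise edition reservation with 100 slots.
  bq mk \
  --location=azure-eastus2 \
  --reservation \
  --edition=ENTERPRISE \
  --slots=100 \
  my_reservation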

To run a query on the Blob Storage BigLake table, follow these steps:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    SELECT * FROM DATASET_NAME.TABLE_NAME;

    Replace the following:

    • DATASET_NAME: the name of the dataset that you created
    • TABLE_NAME: the name of the BigLake table that you created

  3. Click Run.

For more information about how to run queries, see Run an interactive query.
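
Alternatively, you can run the same query with the bq command-line tool, as in the following sketch:

  bq query \
  --use_legacy_sql=false \
  'SELECT * FROM DATASET_NAME.TABLE_NAME;'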

Query a temporary table

BigQuery creates temporary tables to store query results. To retrieve query results from temporary tables, you can use the Google Cloud console or the BigQuery API.

Select one of the following options:

Console

When you query a BigLake table that references external cloud data, the query results are displayed in the Google Cloud console.

API

To query a BigLake table using the API, follow these steps:

  1. Create a Job object.
  2. Call the jobs.insert method to run the query asynchronously, or the jobs.query method to run it synchronously, passing in the Job object (see the curl sketch after these steps).
  3. Read rows with the jobs.getQueryResults method by passing the job reference, and with the tabledata.list method by passing the table reference of the query result.
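
For example, the following curl sketch issues a synchronous query through the jobs.query method. PROJECT_ID, DATASET_NAME, and TABLE_NAME are placeholders, and the access token is obtained from the gcloud CLI:

  # Illustrative: runs the query synchronously with jobs.query.
  curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries" \
  -d '{"query": "SELECT * FROM DATASET_NAME.TABLE_NAME", "useLegacySql": false}'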

Query the _FILE_NAME pseudocolumn

Tables based on external data sources provide a pseudocolumn named _FILE_NAME. This column contains the fully qualified path to the file to which the row belongs. This column is available only for tables that reference external data stored in Cloud Storage, Google Drive, Amazon S3, and Azure Blob Storage.

The _FILE_NAME column name is reserved, which means that you cannot create a column by that name in any of your tables. To select the value of _FILE_NAME, you must use an alias. The following example query demonstrates selecting _FILE_NAME by assigning the alias fn to the pseudocolumn.

  bq query \
  --project_id=PROJECT_ID \
  --use_legacy_sql=false \
  'SELECT
     name,
     _FILE_NAME AS fn
   FROM
     `DATASET.TABLE_NAME`
   WHERE
     CONTAINS_SUBSTR(name, "Alex")'

Replace the following:

  • PROJECT_ID: a valid project ID. This flag is not required if you use Cloud Shell or if you set a default project in the Google Cloud CLI.
  • DATASET: the name of the dataset that stores the permanent external table
  • TABLE_NAME: the name of the permanent external table

When a query has a filter predicate on the _FILE_NAME pseudocolumn, BigQuery attempts to skip reading files that do not satisfy the filter. The recommendations for querying ingestion-time partitioned tables by using pseudocolumns also apply when you construct query predicates with the _FILE_NAME pseudocolumn.
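
For example, a predicate on a path prefix lets BigQuery skip files outside that prefix. In the following sketch, the Azure storage account, container, and path are illustrative:

  bq query \
  --use_legacy_sql=false \
  'SELECT
     name
   FROM
     `DATASET.TABLE_NAME`
   WHERE
     _FILE_NAME LIKE "azure://ACCOUNT.blob.core.windows.net/CONTAINER/PATH/%"'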

What's next