Create an Apache Iceberg table with metadata in BigQuery Metastore

This document shows you how to run Dataproc Serverless for Spark SQL and PySpark batch workloads to create an Apache Iceberg table with metadata stored in BigQuery Metastore. For information about other ways to run Spark code, see Run PySpark code in a BigQuery notebook and Run an Apache Spark workload.

Before you begin

If you haven't already done so, create a Google Cloud project and a Cloud Storage bucket.

  1. Set up your project

    1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
    2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

      Go to project selector

    3. Enable the Dataproc, BigQuery, and Cloud Storage APIs.

      Enable the APIs

  2. Create a Cloud Storage bucket in your project.

    1. In the Google Cloud console, go to the Cloud Storage Buckets page.

      Go to Buckets page

    2. Click Create bucket.
    3. On the Create a bucket page, enter your bucket information. To go to the next step, click Continue.
      • For Name your bucket, enter a name that meets the bucket naming requirements.
      • For Choose where to store your data, do the following:
        • Select a Location type option.
        • Select a Location option.
      • For Choose a default storage class for your data, select a storage class.
      • For Choose how to control access to objects, select an Access control option.
      • For Advanced settings (optional), specify an encryption method, a retention policy, or bucket labels.
    4. Click Create.
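
    Google Cloud CLI example (a minimal sketch of an equivalent command; BUCKET_NAME and LOCATION are placeholders you choose):

    # Creates a bucket in the chosen location with default settings; add flags
    # for storage class or access control if the defaults don't fit your needs.
    gcloud storage buckets create gs://BUCKET_NAME --location=LOCATION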

  3. Grant the BigQuery Data Editor (roles/bigquery.dataEditor) role to the Compute Engine default service account, PROJECT_NUMBER-compute@developer.gserviceaccount.com. For instructions, see Grant a single role.

    Google Cloud CLI example:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
        --role="roles/bigquery.dataEditor"
    

    Notes:

    • PROJECT_ID and PROJECT_NUMBER are listed in the Project info section in the Google Cloud console Dashboard.
  4. Make sure the regional VPC subnet where you will run your Dataproc Serverless batch workload has Private Google Access enabled. For more information, see Dataproc Serverless for Spark network configuration.
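
    Google Cloud CLI example (a sketch for checking and, if needed, enabling Private Google Access; SUBNET_NAME and REGION are placeholders for your subnet and region):

    # Check whether Private Google Access is enabled on the subnet.
    gcloud compute networks subnets describe SUBNET_NAME \
        --region=REGION \
        --format="get(privateIpGoogleAccess)"

    # Enable Private Google Access if it is not already enabled.
    gcloud compute networks subnets update SUBNET_NAME \
        --region=REGION \
        --enable-private-ip-google-access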

OSS resources to BigQuery resources mapping

Note the following mapping between open source resource and BigQuery resource terms:

OSS resource                          BigQuery resource
Namespace, Database                   Dataset
Partitioned or Unpartitioned Table    Table
View                                  View
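
For example, after one of the workloads later in this document creates the example_namespace namespace, it should appear in BigQuery as a dataset. A minimal sketch of confirming the mapping with the bq command-line tool (assuming the example_namespace name used later in this document):

    # List datasets in the project; the Iceberg namespace appears as a dataset.
    bq ls --project_id=PROJECT_ID

    # List tables in that dataset; the Iceberg table appears as a table.
    bq ls PROJECT_ID:example_namespace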

Create an Iceberg table

This section shows you how to create an Iceberg table with metadata in BigQuery Metastore using Dataproc Serverless Spark SQL and PySpark batch workloads.

Spark SQL

Run a Spark SQL workload to create an Iceberg table

The following steps show you how to run a Dataproc Serverless Spark SQL batch workload to create an Iceberg table with table metadata stored in BigQuery Metastore.

  1. Copy the following Spark SQL commands locally or in Cloud Shell into an iceberg-table.sql file.

    USE CATALOG_NAME;
    CREATE NAMESPACE IF NOT EXISTS example_namespace;
    USE example_namespace;
    DROP TABLE IF EXISTS example_table;
    CREATE TABLE example_table (id int, data string) USING ICEBERG LOCATION 'gs://BUCKET/WAREHOUSE_FOLDER';
    INSERT INTO example_table VALUES (1, 'first row');
    ALTER TABLE example_table ADD COLUMNS (newDoubleCol double);
    DESCRIBE TABLE example_table;
    

    Replace the following:

    • CATALOG_NAME: The Iceberg catalog name.
    • BUCKET and WAREHOUSE_FOLDER: The Cloud Storage bucket and folder used as the Iceberg warehouse directory.
  2. Run the following command locally or in Cloud Shell from the directory containing iceberg-table.sql to submit the Spark SQL workload.

    gcloud dataproc batches submit spark-sql iceberg-table.sql \
        --project=PROJECT_ID \
        --region=REGION \
        --deps-bucket=BUCKET_NAME \
        --version=2.2 \
        --subnet=projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME \
        --properties="spark.sql.catalog.CATALOG_NAME=org.apache.iceberg.spark.SparkCatalog,spark.sql.catalog.CATALOG_NAME.catalog-impl=org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog,spark.sql.catalog.CATALOG_NAME.gcp_project=PROJECT_ID,spark.sql.catalog.CATALOG_NAME.gcp_location=LOCATION,spark.sql.catalog.CATALOG_NAME.warehouse=gs://BUCKET/WAREHOUSE_FOLDER"
      

    Notes:

    • PROJECT_ID: Your Google Cloud project ID. Project IDs are listed in the Project info section on the Google Cloud console Dashboard.
    • REGION: An available Compute Engine region to run the workload.
    • BUCKET_NAME: The name of your Cloud Storage bucket. Spark uploads workload dependencies to a /dependencies folder in this bucket before running the batch workload. The WAREHOUSE_FOLDER is located in this bucket.
    • SUBNET_NAME: The name of a VPC subnet in the REGION that has Private Google Access enabled and meets other session subnet requirements.
    • LOCATION: A supported BigQuery location. The default location is "US".
    • --version: Dataproc Serverless runtime version 2.2 or later.
    • --properties: Catalog properties.
  3. View table metadata in BigQuery.

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.
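
    You can also inspect the table from the command line with the bq tool. A minimal sketch, assuming the example_namespace namespace created above surfaces as a BigQuery dataset of the same name:

    # Show the table's metadata, including its schema.
    bq show --format=prettyjson PROJECT_ID:example_namespace.example_table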

PySpark

The following steps show you how to run a Dataproc Serverless PySpark batch workload to create an Iceberg table with table metadata stored in BigQuery Metastore.

  1. Copy the following PySpark code locally or in Cloud Shell into an iceberg-table.py file.
    catalog = "CATALOG_NAME"
    namespace = "NAMESPACE"
    
    spark.sql(f"USE `{catalog}`;")
    spark.sql(f"CREATE NAMESPACE IF NOT EXISTS `{namespace}`;")
    spark.sql(f"USE `{namespace}`;")
    
    # Create table and display schema
    spark.sql("DROP TABLE IF EXISTS example_iceberg_table")
    spark.sql("CREATE TABLE example_iceberg_table (id int, data string) USING ICEBERG") 
    spark.sql("DESCRIBE example_iceberg_table;")
    
    # Insert table data.
    spark.sql("INSERT INTO example_iceberg_table VALUES (1, 'first row');")
    
    # Alter table, then display schema. 
    spark.sql("ALTER TABLE example_iceberg_table ADD COLUMNS (newDoubleCol double);")
    spark.sql("DESCRIBE example_iceberg_table;")

    Replace the following:

    • CATALOG_NAME and NAMESPACE: The Iceberg catalog name and namespace combine to identify the Iceberg table (catalog.namespace.table_name).
  2. Run the following command locally or in Cloud Shell from the directory containing iceberg-table.py to submit the PySpark workload.
    gcloud dataproc batches submit pyspark iceberg-table.py \
        --project=PROJECT_ID \
        --region=REGION \
        --deps-bucket=BUCKET_NAME \
        --version=2.2 \
        --subnet=projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME \
        --properties="spark.sql.catalog.CATALOG_NAME=org.apache.iceberg.spark.SparkCatalog,spark.sql.catalog.CATALOG_NAME.catalog-impl=org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog,spark.sql.catalog.CATALOG_NAME.gcp_project=PROJECT_ID,spark.sql.catalog.CATALOG_NAME.gcp_location=LOCATION,spark.sql.catalog.CATALOG_NAME.warehouse=gs://BUCKET/WAREHOUSE_FOLDER"
        

    Notes:

    • PROJECT_ID: Your Google Cloud project ID. Project IDs are listed in the Project info section on the Google Cloud console Dashboard.
    • REGION: An available Compute Engine region to run the workload.
    • BUCKET_NAME: The name of your Cloud Storage bucket. Spark uploads workload dependencies to a /dependencies folder in this bucket before running the batch workload.
    • SUBNET_NAME: The name of a VPC subnet in the REGION that has Private Google Access enabled and meets other session subnet requirements.
    • --version: Dataproc Serverless runtime version 2.2 or later.
    • LOCATION: A supported BigQuery location. The default location is "US".
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used as the Iceberg warehouse directory.
    • --properties: Catalog properties.
  3. View table schema in BigQuery.

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.
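
For either workload, you can check the status of the submitted batch from the command line. A minimal sketch using the Google Cloud CLI (BATCH_ID is the batch ID reported by the gcloud dataproc batches submit command, or shown by the first command below):

    # List recent batch workloads in the region.
    gcloud dataproc batches list --region=REGION

    # Describe a specific batch, including its state.
    gcloud dataproc batches describe BATCH_ID --region=REGION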