Creating and using clustered tables

This document describes how to create and use clustered tables in BigQuery. For an overview of clustered table support in BigQuery, see Introduction to clustered tables.

Limitations

Clustered tables in BigQuery are subject to the following limitations:

  • Only standard SQL is supported for querying clustered tables and for writing query results to clustered tables.
  • Clustering columns must be top-level, non-repeated columns of one of the following types:

    • DATE
    • BOOL
    • GEOGRAPHY
    • INT64
    • NUMERIC
    • STRING
    • TIMESTAMP

    For more information about data types, see Standard SQL data types.

  • You can specify up to four clustering columns.

  • When using STRING type columns for clustering, BigQuery uses only the first 1,024 characters to cluster the data. The values in the columns can themselves be longer than 1,024 characters.

Creating clustered tables

You can create a clustered table in the following ways:

  • Using the Google Cloud Console.
  • Using the bq command-line tool's bq mk command.
  • Calling the tables.insert API method.
  • Using the client libraries.
  • Creating a clustered table from a query result.
  • Creating a clustered table when you load data.

Table naming

When you create a table in BigQuery, the table name must be unique per dataset. The table name can:

  • Contain up to 1,024 characters.
  • Contain Unicode characters in category L (letter), M (mark), N (number), Pc (connector, including underscore), Pd (dash), Zs (space). For more information, see General Category.

For example, the following are all valid table names: table-01, ग्राहक, 00_お客様, étudiant.

Required permissions

At a minimum, to create a table, you must be granted the following permissions:

  • bigquery.tables.create permissions to create the table
  • bigquery.tables.updateData to write data to the table by using a load job, a query job, or a copy job
  • bigquery.jobs.create to run a query job, load job, or copy job that writes data to the table

Additional permissions such as bigquery.tables.getData might be required to access the data you're writing to the table.

The following predefined IAM roles include both bigquery.tables.create and bigquery.tables.updateData permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

The following predefined IAM roles include bigquery.jobs.create permissions:

  • bigquery.user
  • bigquery.jobUser
  • bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to create and update tables in the dataset.

For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions.

Creating an empty clustered table with a schema definition

You specify clustering columns when you create a table in BigQuery. After the table is created, you can modify the clustering columns; see Modifying clustering specification for details.

Clustering columns must be top-level, non-repeated columns, and they must be one of the following simple data types:

  • DATE
  • BOOL
  • GEOGRAPHY
  • INT64
  • NUMERIC
  • STRING
  • TIMESTAMP

You can specify up to four clustering columns. When you specify multiple columns, the order of the columns determines how the data is sorted. For example, if the table is clustered by columns a, b and c, the data is sorted in the same order: first by column a, then by column b, and then by column c. As a best practice, place the most frequently filtered or aggregated column first.

The order of your clustering columns also affects query performance and pricing. For more information about query best practices for clustered tables, see Querying clustered tables.

To create an empty clustered table with a schema definition:

Console

  1. In the Google Cloud Console, go to the BigQuery page.

    Go to the BigQuery page

  2. In the navigation panel, in the Resources section, expand your project and select a dataset.

  3. On the right side of the window, in the details panel, click Create table.

  4. On the Create table page, under Source, for Create table from, select Empty table.

  5. Under Destination:

    • For Dataset name, choose the appropriate dataset, and in the Table name field, enter the name of the table you're creating.
    • Verify that Table type is set to Native table.
  6. Under Schema, enter the schema definition.

    • Enter schema information manually by:

      • Enabling Edit as text and entering the table schema as a JSON array.

      • Using Add field to manually input the schema.

  7. (Optional) Under Partition and cluster settings, select Partition by field and choose the DATE or TIMESTAMP column. This option is not available if the schema does not contain a DATE or TIMESTAMP column.

    To create an ingestion-time partitioned table, select Partition by ingestion time.

  8. (Optional) For Partitioning filter, click the Require partition filter checkbox to require users to include a WHERE clause that specifies the partitions to query. Requiring a partition filter can reduce cost and improve performance. For more information, see Querying partitioned tables.

  9. For Clustering order, enter between one and four comma-separated column names.

  10. (Optional) Click Advanced options and for Encryption, click Customer-managed key to use a Cloud Key Management Service key. If you leave the Google-managed key setting, BigQuery encrypts the data at rest.

  11. Click Create table.

bq

Use the bq mk command with the following flags:

  • --table (or the -t shortcut).
  • --schema. You can supply the table's schema definition inline or use a JSON schema file.
  • --clustering_fields. You can specify up to four clustering columns.

Optional parameters include --expiration, --description, --time_partitioning_type, --time_partitioning_field, --time_partitioning_expiration, --destination_kms_key, and --label.

If you are creating a table in a project other than your default project, add the project ID to the dataset in the following format: project_id:dataset.

--destination_kms_key is not demonstrated here. For information about using --destination_kms_key, see customer-managed encryption keys.

Enter the following command to create an empty clustered table with a schema definition:

bq mk \
--table \
--expiration INTEGER1 \
--schema SCHEMA \
--time_partitioning_type=DAY \
--time_partitioning_field PARTITION_COLUMN \
--clustering_fields CLUSTER_COLUMNS \
--time_partitioning_expiration INTEGER2 \
--description "DESCRIPTION" \
--label KEY:VALUE,KEY:VALUE \
PROJECT_ID:DATASET.TABLE

Replace the following:

  • INTEGER1: the default lifetime, in seconds, for the table. The minimum value is 3,600 seconds (one hour). The expiration time evaluates to the current UTC time plus the integer value. If you set the table's expiration time when you create a time-partitioned table, the dataset's default table expiration setting is ignored. Setting this value deletes the table and all partitions after the specified time.
  • SCHEMA: an inline schema definition in the format COLUMN:DATA_TYPE,COLUMN:DATA_TYPE or the path to the JSON schema file on your local machine.
  • PARTITION_COLUMN: the name of the TIMESTAMP or DATE column used to create a partitioned table. If you create a partitioned table, you do not need to specify the --time_partitioning_type=DAY flag.
  • CLUSTER_COLUMNS: a comma-separated list of up to four clustering columns. The list cannot contain any spaces.
  • INTEGER2: the default lifetime, in seconds, for the table's partitions. There is no minimum value. The expiration time evaluates to the partition's date plus the integer value. The partition expiration is independent of the table's expiration but does not override it. If you set a partition expiration that is longer than the table's expiration, the table expiration takes precedence.
  • DESCRIPTION: a description of the table, in quotes.
  • KEY:VALUE: the key-value pair that represents a label. You can enter multiple labels using a comma-separated list.
  • PROJECT_ID: your project ID.
  • DATASET: a dataset in your project.
  • TABLE: the name of the partitioned table you're creating.

When you specify the schema on the command line, you cannot include a RECORD (STRUCT) type, you cannot include a column description, and you cannot specify the column's mode. All modes default to NULLABLE. To include descriptions, modes, and RECORD types, supply a JSON schema file instead.
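
For illustration, here is a minimal sketch of what such a JSON schema file might contain. The field names are hypothetical; the sketch shows a column description, an explicit mode, and a RECORD type, none of which can be expressed inline:

[
  {
    "name": "customer_id",
    "type": "STRING",
    "mode": "REQUIRED",
    "description": "Unique customer identifier"
  },
  {
    "name": "transaction_amount",
    "type": "NUMERIC",
    "mode": "NULLABLE"
  },
  {
    "name": "address",
    "type": "RECORD",
    "mode": "NULLABLE",
    "fields": [
      {"name": "city", "type": "STRING"},
      {"name": "zipcode", "type": "STRING"}
    ]
  }
]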

Examples:

Enter the following command to create a clustered table named myclusteredtable in mydataset in your default project. The table is a partitioned table (partitioned by a TIMESTAMP column). The partitioning expiration is set to 86,400 seconds (1 day), the table's expiration is set to 2,592,000 seconds (30 days), the description is set to This is my clustered table, and the label is set to organization:development. The command uses the -t shortcut instead of --table.

The schema is specified inline as: timestamp:timestamp,customer_id:string,transaction_amount:float. The specified clustering field customer_id is used to cluster the partitions.

bq mk -t \
--expiration 2592000 \
--schema 'timestamp:timestamp,customer_id:string,transaction_amount:float' \
--time_partitioning_field timestamp \
--clustering_fields customer_id \
--time_partitioning_expiration 86400  \
--description "This is my clustered table" \
--label organization:development \
mydataset.myclusteredtable

Enter the following command to create a clustered table named myclusteredtable in myotherproject, not your default project. The table is an ingestion-time partitioned table. The partitioning expiration is set to 259,200 seconds (3 days), the description is set to This is my partitioned table, and the label is set to organization:development. The command uses the -t shortcut instead of --table. This command does not specify a table expiration. If the dataset has a default table expiration, it is applied. If the dataset has no default table expiration, the table never expires, but the partitions expire in 3 days.

The schema is specified in a local JSON file: /tmp/myschema.json. The customer_id field is used to cluster the partitions.

bq mk -t \
--schema /tmp/myschema.json \
--time_partitioning_type=DAY \
--clustering_fields=customer_id \
--time_partitioning_expiration 259200 \
--description "This is my partitioned table" \
--label organization:development \
myotherproject:mydataset.myclusteredtable

After the table is created, you can update the partitioned table's table expiration, partition expiration, description, and labels.
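
For example, the following sketch uses the bq update command to change the table's description (the new description text is illustrative; bq update accepts other flags as well):

bq update \
--description "This is my updated clustered table" \
myotherproject:mydataset.myclusteredtable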

API

Call the tables.insert method with a defined table resource that specifies the timePartitioning property, the clustering.fields property, and the schema property.

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to create.
# table_id = "your-project.your_dataset.your_table_name"

schema = [
    bigquery.SchemaField("full_name", "STRING"),
    bigquery.SchemaField("city", "STRING"),
    bigquery.SchemaField("zipcode", "INTEGER"),
]

table = bigquery.Table(table_id, schema=schema)
table.clustering_fields = ["city", "zipcode"]
table = client.create_table(table)  # Make an API request.
print(
    "Created clustered table {}.{}.{}".format(
        table.project, table.dataset_id, table.table_id
    )
)

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// createTableClustered demonstrates creating a BigQuery table with advanced properties like
// partitioning and clustering features.
func createTableClustered(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydatasetid"
	// tableID := "mytableid"
	ctx := context.Background()

	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	sampleSchema := bigquery.Schema{
		{Name: "timestamp", Type: bigquery.TimestampFieldType},
		{Name: "origin", Type: bigquery.StringFieldType},
		{Name: "destination", Type: bigquery.StringFieldType},
		{Name: "amount", Type: bigquery.NumericFieldType},
	}
	metaData := &bigquery.TableMetadata{
		Schema: sampleSchema,
		TimePartitioning: &bigquery.TimePartitioning{
			Field:      "timestamp",
			Expiration: 90 * 24 * time.Hour,
		},
		Clustering: &bigquery.Clustering{
			Fields: []string{"origin", "destination"},
		},
	}
	tableRef := client.Dataset(datasetID).Table(tableID)
	if err := tableRef.Create(ctx, metaData); err != nil {
		return err
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Clustering;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;
import com.google.cloud.bigquery.TimePartitioning;
import com.google.common.collect.ImmutableList;

public class CreateClusteredTable {
  public static void runCreateClusteredTable() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    createClusteredTable(datasetName, tableName);
  }

  public static void createClusteredTable(String datasetName, String tableName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId tableId = TableId.of(datasetName, tableName);

      TimePartitioning partitioning = TimePartitioning.of(TimePartitioning.Type.DAY);

      Schema schema =
          Schema.of(
              Field.of("name", StandardSQLTypeName.STRING),
              Field.of("post_abbr", StandardSQLTypeName.STRING),
              Field.of("date", StandardSQLTypeName.DATE));

      Clustering clustering =
          Clustering.newBuilder().setFields(ImmutableList.of("name", "post_abbr")).build();

      StandardTableDefinition tableDefinition =
          StandardTableDefinition.newBuilder()
              .setSchema(schema)
              .setTimePartitioning(partitioning)
              .setClustering(clustering)
              .build();
      TableInfo tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build();

      bigquery.create(tableInfo);
      System.out.println("Clustered table created successfully");
    } catch (BigQueryException e) {
      System.out.println("Clustered table was not created. \n" + e.toString());
    }
  }
}

Creating a clustered table from a query result

There are two ways to create a clustered table from a query result:

  • Use a DDL CREATE TABLE ... AS SELECT statement.
  • Use the bq command-line tool's bq query command.

You can create a clustered table by querying either a partitioned table or a non-partitioned table. You cannot change an existing table to a clustered table by using query results.

When you create a clustered table from a query result, you must use standard SQL. Currently, legacy SQL is not supported for querying clustered tables or for writing query results to clustered tables.

Console

You cannot specify clustering options for a destination table when you query data using the Cloud Console unless you use a DDL statement. For more information, see Using data definition language statements.

bq

Enter the following command to create a new, clustered destination table from a query result:

bq --location=LOCATION query \
--use_legacy_sql=false 'QUERY'

Replace the following:

  • LOCATION: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • QUERY: a query in standard SQL syntax. Currently, you cannot use legacy SQL to query clustered tables or to write query results to clustered tables. The query can contain a CREATE TABLE DDL statement that specifies the options for creating your clustered table. You can use DDL rather than specifying the individual command-line flags.

Examples:

Enter the following command to write query results to a clustered destination table named myclusteredtable in mydataset. mydataset is in your default project. The query retrieves data from a non-partitioned table: mytable. The table's customer_id column is used to cluster the table. The table's timestamp column is used to create a partitioned table.

bq query --use_legacy_sql=false \
'CREATE TABLE
   mydataset.myclusteredtable
 PARTITION BY
   DATE(timestamp)
 CLUSTER BY
   customer_id AS
 SELECT
   *
 FROM
   `mydataset.mytable`'

API

To save query results to a clustered table, call the jobs.insert method, configure a query job, and include a CREATE TABLE DDL statement that creates your clustered table.

Specify your location in the location property in the jobReference section of the job resource.
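
Python

The Python client library issues jobs.insert on your behalf. The following is a minimal sketch that reuses the DDL statement from the bq example above; the location value is an assumption and should match your dataset's location:

from google.cloud import bigquery

client = bigquery.Client()

# The DDL statement both creates the clustered table and populates it
# with the query result.
sql = """
CREATE TABLE mydataset.myclusteredtable
PARTITION BY DATE(timestamp)
CLUSTER BY customer_id AS
SELECT * FROM `mydataset.mytable`
"""

# The location argument corresponds to jobReference.location in the API.
query_job = client.query(sql, location="US")
query_job.result()  # Wait for the query job to finish.
print("Created clustered table mydataset.myclusteredtable")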

Creating a clustered table when you load data

You can create a clustered table by specifying clustering columns when you load data into a new table. You do not need to create an empty table before loading data into it. You can create the clustered table and load your data at the same time.

For more information about loading data, see Introduction to loading data into BigQuery.

To define clustering when defining a load job:

API

To define a clustering configuration when creating a table through a load job, you can populate the Clustering properties for the table.
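
Python

As a minimal sketch with the Python client library, mirroring the Go sample that follows (including the same public sample file), populate clustering_fields on the load job configuration:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to create.
table_id = "your-project.your_dataset.your_table_name"

job_config = bigquery.LoadJobConfig(
    skip_leading_rows=1,
    source_format=bigquery.SourceFormat.CSV,
    schema=[
        bigquery.SchemaField("timestamp", "TIMESTAMP"),
        bigquery.SchemaField("origin", "STRING"),
        bigquery.SchemaField("destination", "STRING"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
    time_partitioning=bigquery.TimePartitioning(field="timestamp"),
    clustering_fields=["origin", "destination"],
)

uri = "gs://cloud-samples-data/bigquery/sample-transactions/transactions.csv"
load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # Wait for the load job to complete.
print("Loaded {} rows.".format(client.get_table(table_id).num_rows))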

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// importClusteredTable demonstrates creating a table from a load job and defining partitioning and clustering
// properties.
func importClusteredTable(projectID, destDatasetID, destTableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	gcsRef := bigquery.NewGCSReference("gs://cloud-samples-data/bigquery/sample-transactions/transactions.csv")
	gcsRef.SkipLeadingRows = 1
	gcsRef.Schema = bigquery.Schema{
		{Name: "timestamp", Type: bigquery.TimestampFieldType},
		{Name: "origin", Type: bigquery.StringFieldType},
		{Name: "destination", Type: bigquery.StringFieldType},
		{Name: "amount", Type: bigquery.NumericFieldType},
	}
	loader := client.Dataset(destDatasetID).Table(destTableID).LoaderFrom(gcsRef)
	loader.TimePartitioning = &bigquery.TimePartitioning{
		Field: "timestamp",
	}
	loader.Clustering = &bigquery.Clustering{
		Fields: []string{"origin", "destination"},
	}
	loader.WriteDisposition = bigquery.WriteEmpty

	job, err := loader.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}

	if status.Err() != nil {
		return fmt.Errorf("job completed with error: %v", status.Err())
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Clustering;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TimePartitioning;
import com.google.common.collect.ImmutableList;

public class LoadTableClustered {

  public static void runLoadTableClustered() throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    String sourceUri = "/path/to/file.csv";
    loadTableClustered(datasetName, tableName, sourceUri);
  }

  public static void loadTableClustered(String datasetName, String tableName, String sourceUri)
      throws Exception {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId tableId = TableId.of(datasetName, tableName);

      Schema schema =
          Schema.of(
              Field.of("name", StandardSQLTypeName.STRING),
              Field.of("post_abbr", StandardSQLTypeName.STRING),
              Field.of("date", StandardSQLTypeName.DATE));

      TimePartitioning partitioning = TimePartitioning.of(TimePartitioning.Type.DAY);

      Clustering clustering =
          Clustering.newBuilder().setFields(ImmutableList.of("name", "post_abbr")).build();

      LoadJobConfiguration loadJobConfig =
          LoadJobConfiguration.builder(tableId, sourceUri)
              .setFormatOptions(FormatOptions.csv())
              .setSchema(schema)
              .setTimePartitioning(partitioning)
              .setClustering(clustering)
              .build();

      Job loadJob = bigquery.create(JobInfo.newBuilder(loadJobConfig).build());

      // Load data from a CSV file into the table.
      // Blocks until this load table job completes its execution, either failing or succeeding.
      Job completedJob = loadJob.waitFor();

      // Check for errors
      if (completedJob == null) {
        throw new Exception("Job not executed since it no longer exists.");
      } else if (completedJob.getStatus().getError() != null) {
        // You can also look at queryJob.getStatus().getExecutionErrors() for all
        // errors, not just the latest one.
        throw new Exception(
            "BigQuery was unable to load into the table due to an error: \n"
                + loadJob.getStatus().getError());
      }
      System.out.println("Data successfully loaded into clustered table during load job");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Data not loaded into clustered table during load job \n" + e.toString());
    }
  }
}

Controlling access to clustered tables

To configure access to tables and views, you can grant an IAM role to an entity at the following levels, listed in order of range of resources allowed (largest to smallest):

  • a high level in the Google Cloud resource hierarchy, such as the project, folder, or organization level
  • the dataset level
  • the table or view level

Access with any resource protected by IAM is additive. For example, if an entity does not have access at a high level, such as a project, you could grant the entity access at the dataset level, and then the entity has access to the tables and views in the dataset. Similarly, if the entity does not have access at the high level or the dataset level, you could grant the entity access at the table or view level.

Granting IAM roles at a higher level in the Google Cloud resource hierarchy such as the project, folder, or organization level gives the entity access to a broad set of resources. For example, granting a role to an entity at the project level gives that entity permissions that apply to all datasets throughout the project.

Granting a role at the dataset level specifies the operations an entity is allowed to perform on tables and views in that specific dataset, even if the entity does not have access at a higher level. For information on configuring dataset-level access controls, see Controlling access to datasets.

Granting a role at the table or view level specifies the operations an entity is allowed to perform on specific tables and views, even if the entity does not have access at a higher level. For information on configuring table-level access controls, see Controlling access to tables and views.

You can also create IAM custom roles. If you create a custom role, the permissions you grant depend on the specific operations you want the entity to be able to perform.

You can't set a "deny" permission on any resource protected by IAM.

For more information on roles and permissions, see Predefined roles and permissions.

Using clustered tables

Getting information about clustered tables

You can get information about tables in the following ways:

  • Using the Cloud Console.
  • Using the bq command-line tool's bq show command.
  • Calling the tables.get API method.
  • Querying INFORMATION_SCHEMA views.

Required permissions

At a minimum, to get information about tables, you must be granted bigquery.tables.get permissions. The following predefined IAM roles include bigquery.tables.get permissions:

  • bigquery.metadataViewer
  • bigquery.dataViewer
  • bigquery.dataOwner
  • bigquery.dataEditor
  • bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to get information about tables in a dataset.

For more information about IAM roles and permissions in BigQuery, see Predefined roles and permissions.

Getting clustered table information

To view information about a clustered table:

Console

  1. In the Google Cloud Console, go to the Resources pane. Click your dataset name to expand it, and then click the table name you want to view.

  2. Click Details. This page displays the table's details including the clustering columns.

bq

Issue the bq show command to display all table information. Use the --schema flag to display only table schema information. The --format flag can be used to control the output.

If you are getting information about a table in a project other than your default project, add the project ID to the dataset in the following format: project_id:dataset.

bq show \
--schema \
--format=prettyjson \
PROJECT_ID:DATASET.TABLE

Replace the following:

  • PROJECT_ID: your project ID
  • DATASET: the name of the dataset
  • TABLE: the name of the table

Examples:

Enter the following command to display all information about myclusteredtable in mydataset. mydataset is in your default project.

bq show --format=prettyjson mydataset.myclusteredtable

The output should look like the following:

{
  "clustering": {
    "fields": [
      "customer_id"
    ]
  },
...
}

API

Call the bigquery.tables.get method and provide any relevant parameters.
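
Python

As a minimal sketch, the Python client library exposes the same information through its get_table method; the table ID below is hypothetical:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Set table_id to the table to inspect.
table_id = "your-project.mydataset.myclusteredtable"

table = client.get_table(table_id)  # Calls the tables.get API method.
print("Clustering fields: {}".format(table.clustering_fields))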

SQL

For clustered tables, you can query the CLUSTERING_ORDINAL_POSITION column in the INFORMATION_SCHEMA.COLUMNS view to retrieve information about a table's clustering columns.

-- Set up a table with clustering.
CREATE TABLE myDataset.data (column1 INT64, column2 INT64)
PARTITION BY _PARTITIONDATE
CLUSTER BY column1, column2;

-- This query returns 1 for column1 and 2 for column2.
SELECT column_name, clustering_ordinal_position
FROM myDataset.INFORMATION_SCHEMA.COLUMNS;

More table metadata is available through the TABLES, TABLE_OPTIONS, COLUMNS, and COLUMN_FIELD_PATH views in INFORMATION_SCHEMA.

Listing clustered tables in a dataset

You can list clustered tables in datasets in the following ways:

  • Using the Cloud Console.
  • Using the bq command-line tool's bq ls command.
  • Calling the tables.list API method.
  • Using the client libraries.
  • Querying the CLUSTERING_ORDINAL_POSITION column in the INFORMATION_SCHEMA.COLUMNS view.

The permissions required to list clustered tables and the steps to list them are the same as for partitioned tables. For more information about listing tables, see Listing partitioned tables in a dataset.
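
For example, here is a minimal sketch (assuming a dataset named mydataset) that uses the INFORMATION_SCHEMA approach from the Python client library to list each clustered table and its clustering columns:

from google.cloud import bigquery

client = bigquery.Client()

# Columns with a non-NULL clustering position belong to clustered tables.
sql = """
SELECT table_name, column_name, clustering_ordinal_position
FROM mydataset.INFORMATION_SCHEMA.COLUMNS
WHERE clustering_ordinal_position IS NOT NULL
ORDER BY table_name, clustering_ordinal_position
"""

for row in client.query(sql).result():
    print(row.table_name, row.column_name, row.clustering_ordinal_position)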

Modifying clustering specification

You can change or remove a table's clustering specification by calling the tables.update or tables.patch methods. You can also change the set of clustered columns in a clustered table to a different set of columns. This method of updating the clustering column set is most useful for tables that receive continuous streaming inserts, because those tables cannot easily be swapped by other means.

When a table is converted from non-clustered to clustered or the clustered column set is changed, automatic re-clustering only works from that time onward. For example, a non-clustered 1 PB table that is converted to a clustered table using tables.update still has 1 PB of non-clustered data. Automatic re-clustering only applies to any new data committed to the table after the update.
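
For example, here is a minimal sketch that changes a table's clustering columns by using the Python client library, which calls tables.patch; the table ID and the new column names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Set table_id to the table to modify.
table_id = "your-project.mydataset.myclusteredtable"

table = client.get_table(table_id)
# Assign a new list of columns, or None to remove the clustering specification.
table.clustering_fields = ["customer_id", "transaction_date"]
table = client.update_table(table, ["clustering_fields"])
print("Clustering fields are now: {}".format(table.clustering_fields))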

Next steps