CUSTOM: Partition key schema is encoded as specified in the source URI prefix.
Custom partition key schema
To use a CUSTOM schema, you must specify the schema in the source URI prefix field. Using a CUSTOM schema lets you specify the type for each partition key. The values must validly parse as the specified type, or the query fails.
For example, if you set the source_uri_prefix flag to gs://my_bucket/my_table/{dt:DATE}/{val:STRING}, BigQuery treats val as a STRING, treats dt as a DATE, and uses gs://my_bucket/my_table as the source URI prefix for the matched files.
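If you create the table with GoogleSQL DDL rather than a table definition file, the typed partition keys can be declared with a WITH PARTITION COLUMNS clause. The following is a minimal sketch of one way to do this; the project, dataset, and table names and the Parquet format are assumptions, not values from this page.

    -- Minimal sketch (assumed names and format): declares dt and val with
    -- explicit types, mirroring gs://my_bucket/my_table/{dt:DATE}/{val:STRING}.
    CREATE EXTERNAL TABLE `my_project.my_dataset.my_table`
    WITH PARTITION COLUMNS (
      dt DATE,
      val STRING
    )
    OPTIONS (
      format = 'PARQUET',
      uris = ['gs://my_bucket/my_table/*'],
      hive_partition_uri_prefix = 'gs://my_bucket/my_table'
    );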
Partition pruning

BigQuery prunes partitions when possible using query predicates on the partition keys. This lets BigQuery avoid reading unnecessary files, which helps improve performance.

Predicate filters on partition keys in queries

When you create an externally partitioned table, you can require the use of predicate filters on partition keys by enabling the requirePartitionFilter option under HivePartitioningOptions.

When this option is enabled, attempts to query the externally partitioned table without specifying a WHERE clause produce the following error: Cannot query over table <table_name> without a filter over column(s) <partition key names> that can be used for partition elimination.

Note: For the filter to be eligible for partition elimination, at least one predicate must reference only one or more partition keys. For example, for a table with partition key val and column f in the file, both WHERE val = "key" and WHERE val = "key" AND f = "column" satisfy the requirement. However, WHERE (val = "key" OR f = "column") is not sufficient.
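As a brief illustration of this rule, the following sketch assumes the table from the earlier example (partition keys dt and val, plus a non-partition column f in the files) was created with the partition filter requirement enabled; the table name is an assumption.

    -- Eligible for partition elimination: at least one predicate references
    -- only the partition key dt, so the query is allowed.
    SELECT * FROM `my_project.my_dataset.my_table`
    WHERE dt = '2019-10-31' AND f = 'column';

    -- Not eligible: the OR ties the partition key to the non-partition
    -- column f, so the query fails with the error shown above.
    SELECT * FROM `my_project.my_dataset.my_table`
    WHERE dt = '2019-10-31' OR f = 'column';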
Limitations
Hive partitioning support is built assuming a common source URI prefix for all URIs that ends immediately before the partition encoding, as follows: gs://BUCKET/PATH_TO_TABLE/.
The directory structure of a Hive partitioned table is assumed to have the same partitioning keys appear in the same order, with a maximum of ten partition keys per table.
The data must follow a default Hive partitioning layout.
The Hive partitioning keys and the columns in the underlying files cannot overlap.
Support is for GoogleSQL only.
All limitations for querying external data sources stored on Cloud Storage apply.

What's next

Learn about partitioned tables.
Learn how to use SQL in BigQuery.