[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eThis document guides you on how to query data in an Azure Blob Storage BigLake table using GoogleSQL syntax, similar to querying a standard BigQuery table.\u003c/p\u003e\n"],["\u003cp\u003eTo query Blob Storage BigLake tables, you need specific roles, including BigQuery Connection User, BigQuery Data Viewer, and BigQuery User, which can be assigned to your account or a Blob Storage connection service account.\u003c/p\u003e\n"],["\u003cp\u003eBigQuery stores cached query results in temporary tables, and you can access these temporary tables via the Google Cloud console or the BigQuery API.\u003c/p\u003e\n"],["\u003cp\u003eExternal data sources provide a \u003ccode\u003e_FILE_NAME\u003c/code\u003e pseudocolumn, which reveals the full path to the file containing each row, allowing for filtering based on file location when using external data.\u003c/p\u003e\n"],["\u003cp\u003eWhen creating a reservation in a BigQuery Omni region, you should use the Enterprise edition.\u003c/p\u003e\n"]]],[],null,["# Query Blob Storage data\n=======================\n\nThis document describes how to query data stored in an\n[Azure Blob Storage BigLake table](/bigquery/docs/omni-azure-create-external-table).\n\nBefore you begin\n----------------\n\nEnsure that you have a [Blob Storage BigLake table](/bigquery/docs/omni-azure-create-external-table).\n\n### Required roles\n\nTo query Blob Storage BigLake tables, ensure\nthat the caller of the BigQuery API has the following roles:\n\n- BigQuery Connection User (`roles/bigquery.connectionUser`)\n- BigQuery Data Viewer (`roles/bigquery.dataViewer`)\n- BigQuery User (`roles/bigquery.user`)\n\nThe caller can be your account or an\n[Blob Storage connection service account](/bigquery/docs/omni-azure-create-connection#create_an_azure_connection).\nDepending on your permissions, you can\ngrant these roles to yourself or ask your administrator\nto grant them to you. For more information about granting roles, see\n[Viewing the grantable roles on resources](/iam/docs/viewing-grantable-roles).\n\nTo see the exact permissions that are required to query\nBlob Storage BigLake tables, expand the\n**Required permissions** section: \n\n#### Required permissions\n\n- `bigquery.connections.use`\n- `bigquery.jobs.create`\n- `bigquery.readsessions.create` (Only required if you are [reading data with the\n BigQuery Storage Read API](/bigquery/docs/reference/storage))\n- `bigquery.tables.get`\n- `bigquery.tables.getData`\n\nYou might also be able to get these permissions with [custom roles](/iam/docs/creating-custom-roles)\nor other [predefined roles](/iam/docs/understanding-roles).\n\nQuery Blob Storage BigLake tables\n---------------------------------\n\nAfter creating a Blob Storage BigLake table, you can [query it using\nGoogleSQL syntax](/bigquery/docs/running-queries), the same as if\nit were a standard BigQuery table.\n\nThe [cached query results](/bigquery/docs/cached-results)\nare stored in a BigQuery temporary table. 
## Query Blob Storage BigLake tables

After creating a Blob Storage BigLake table, you can
[query it using GoogleSQL syntax](/bigquery/docs/running-queries), the same as
if it were a standard BigQuery table. The
[cached query results](/bigquery/docs/cached-results) are stored in a
BigQuery temporary table. To query a temporary BigLake table, see
[Query a temporary BigLake table](#query-temp-biglake-table). For more
information about BigQuery Omni limitations and quotas, see
[limitations](/bigquery/docs/omni-introduction#limitations) and
[quotas](/bigquery/docs/omni-introduction#quotas_and_limits).

When creating a reservation in a BigQuery Omni region, use the Enterprise
edition. To learn how to create a reservation with an edition, see
[Create reservations](/bigquery/docs/reservations-tasks#create_reservations).

To run a query on the Blob Storage BigLake table:

1. In the Google Cloud console, go to the **BigQuery** page.

   [Go to BigQuery](https://console.cloud.google.com/bigquery)

2. In the query editor, enter the following statement:

   ```googlesql
   SELECT * FROM DATASET_NAME.TABLE_NAME;
   ```

   Replace the following:

   - DATASET_NAME: the name of the dataset that you created
   - TABLE_NAME: the name of the BigLake table that you created

3. Click **Run**.

For more information about how to run queries, see
[Run an interactive query](/bigquery/docs/running-queries#queries).

## Query a temporary table

BigQuery creates temporary tables to store query results. To retrieve query
results from temporary tables, you can use the Google Cloud console or the
[BigQuery API](/bigquery/docs/reliability-read#read_with_api).

Select one of the following options:

### Console

When you [query a BigLake table](#query-biglake-table) that references
external cloud data, you can view the query results displayed in the
Google Cloud console.

### API

To query a BigLake table using the API, follow these steps:

1. Create a [Job object](/bigquery/docs/reference/rest/v2/Job).
2. Call the [`jobs.insert` method](/bigquery/docs/reference/v2/jobs/insert) to
   run the query asynchronously, or the
   [`jobs.query` method](/bigquery/docs/reference/rest/v2/jobs/query) to run
   the query synchronously, passing in the `Job` object.
3. Read rows by calling the
   [`jobs.getQueryResults` method](/bigquery/docs/reference/rest/v2/jobs/getQueryResults)
   with the returned job reference, or the
   [`tabledata.list` method](/bigquery/docs/reference/rest/v2/tabledata/list)
   with the table reference of the query result.
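For example, the synchronous path might look like the following sketch, which
calls the `jobs.query` method with `curl`. It assumes the Google Cloud CLI is
installed and authenticated; `PROJECT_ID`, `DATASET_NAME`, and `TABLE_NAME`
are placeholder values:

```bash
# Run a query synchronously with the jobs.query REST method.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries" \
  -d '{
        "query": "SELECT * FROM DATASET_NAME.TABLE_NAME LIMIT 10",
        "useLegacySql": false
      }'
```

If the query completes within the request's timeout, the response contains the
result rows directly; otherwise, it returns a job reference that you can poll
with `jobs.getQueryResults`.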
## Query the `_FILE_NAME` pseudocolumn

Tables based on external data sources provide a pseudocolumn named
`_FILE_NAME`. This column contains the fully qualified path to the file to
which the row belongs. It is available only for tables that reference external
data stored in **Cloud Storage**, **Google Drive**, **Amazon S3**, and
**Azure Blob Storage**.

The `_FILE_NAME` column name is reserved, which means that you cannot create a
column by that name in any of your tables. To select the value of
`_FILE_NAME`, you must use an alias. The following example query demonstrates
selecting `_FILE_NAME` by assigning the alias `fn` to the pseudocolumn:

```bash
bq query \
--project_id=PROJECT_ID \
--use_legacy_sql=false \
'SELECT
  name,
  _FILE_NAME AS fn
FROM
  `DATASET.TABLE_NAME`
WHERE
  name LIKE "%Alex%"'
```

Replace the following:

- PROJECT_ID: a valid project ID. This flag is not required if you use
  Cloud Shell or if you set a default project in the Google Cloud CLI.
- DATASET: the name of the dataset that stores the permanent external table.
- TABLE_NAME: the name of the permanent external table.

When the query has a filter predicate on the `_FILE_NAME` pseudocolumn,
BigQuery attempts to skip reading files that do not satisfy the filter.
Similar recommendations to
[querying ingestion-time partitioned tables using pseudocolumns](/bigquery/docs/querying-partitioned-tables#query_an_ingestion-time_partitioned_table)
apply when constructing query predicates with the `_FILE_NAME` pseudocolumn;
for a sketch of a pruning-friendly predicate, see the example at the end of
this page.

## What's next

- Learn about [using SQL in BigQuery](/bigquery/docs/introduction-sql).
- Learn about [BigQuery Omni](/bigquery/docs/omni-introduction).
- Learn about [BigQuery quotas](/bigquery/quotas).
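To illustrate the file-skipping behavior described in the `_FILE_NAME` section
above, the following minimal sketch filters on a constant path prefix. The
table name and bucket layout are hypothetical, assuming a Cloud Storage-backed
external table whose files are organized by year:

```bash
# Hypothetical example: a predicate on _FILE_NAME that filters by path prefix,
# which lets BigQuery attempt to skip files whose paths cannot match.
bq query \
--use_legacy_sql=false \
'SELECT
  name,
  _FILE_NAME AS fn
FROM
  `mydataset.sales_external`
WHERE
  _FILE_NAME LIKE "gs://my-bucket/sales/2024/%"'
```

Comparing the pseudocolumn directly against a constant, as shown here, gives
BigQuery the best chance to prune files; wrapping `_FILE_NAME` in a function
call can defeat pruning, mirroring the guidance for ingestion-time partitioned
tables.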