WRITE_API_TIMELINE_BY_FOLDER view
=================================

The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view contains per-minute aggregated BigQuery Storage Write API ingestion statistics for the parent folder of the current project, including its subfolders.
You can query the `INFORMATION_SCHEMA` Write API views to retrieve historical and real-time information about data ingestion into BigQuery that uses the BigQuery Storage Write API. See [BigQuery Storage Write API](/bigquery/docs/write-api) for more information.
Required permission
-------------------
To query the `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view, you need the `bigquery.tables.list` Identity and Access Management (IAM) permission for the parent folder of the project.

Each of the following predefined IAM roles includes the preceding permission:

- `roles/bigquery.admin`
- `roles/bigquery.user`
- `roles/bigquery.dataViewer`
- `roles/bigquery.dataEditor`
- `roles/bigquery.dataOwner`
- `roles/bigquery.metadataViewer`
- `roles/bigquery.resourceAdmin`

| **Caution:** The required `bigquery.tables.list` permission is *not* included in the [basic roles](/bigquery/docs/access-control-basic-roles) Owner or Editor.

For more information about BigQuery permissions, see [Access control with IAM](/bigquery/docs/access-control).
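One way to confirm that access is set up correctly is to run a minimal query against the view; this is a sketch rather than part of the official page, and the `region-us` qualifier is an assumption about where your data lives. The query fails with an access error if `bigquery.tables.list` is missing on the parent folder:

```googlesql
-- Counts timeline rows recorded in the last hour. A permissions problem
-- on the parent folder surfaces here as an access-denied error.
SELECT COUNT(*) AS row_count
FROM `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 1 HOUR);
```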
Schema
------

When you query the `INFORMATION_SCHEMA` BigQuery Storage Write API views, the query results contain historical and real-time information about data ingestion into BigQuery using the BigQuery Storage Write API. Each row in the following views represents statistics for ingestion into a specific table, aggregated over a one-minute interval starting at `start_timestamp`. Statistics are grouped by stream type and error code, so there is one row for each stream type and each encountered error code during the one-minute interval for each timestamp and table combination. Successful requests have the error code set to `OK`. If no data was ingested into a table during a certain time period, then no rows are present for the corresponding timestamps for that table.
The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_*` views have the following schema:
| Column name | Data type | Value |
| --- | --- | --- |
| `start_timestamp` | `TIMESTAMP` | (Partitioning column) Start timestamp of the one-minute interval for the aggregated statistics. |
| `project_id` | `STRING` | (Clustering column) ID of the project. |
| `project_number` | `INTEGER` | Number of the project. |
| `dataset_id` | `STRING` | (Clustering column) ID of the dataset. |
| `table_id` | `STRING` | (Clustering column) ID of the table. |
| `stream_type` | `STRING` | The stream type used for the data ingestion with the BigQuery Storage Write API. One of `DEFAULT`, `COMMITTED`, `BUFFERED`, or `PENDING`. |
| `error_code` | `STRING` | Error code returned for the requests specified by this row. `OK` for successful requests. |
| `total_requests` | `INTEGER` | Total number of requests within the one-minute interval. |
| `total_rows` | `INTEGER` | Total number of rows from all requests within the one-minute interval. |
| `total_input_bytes` | `INTEGER` | Total number of bytes from all rows within the one-minute interval. |
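To see this row granularity in practice, the following sketch lists the per-minute entries for a single table; each returned row is one timestamp, stream type, and error code combination, as described above. The `mydataset` and `mytable` names and the `region-us` qualifier are placeholders:

```googlesql
-- Each row is one (minute, stream type, error code) combination
-- for the chosen table, matching the aggregation described above.
SELECT
  start_timestamp,
  stream_type,
  error_code,
  total_requests,
  total_rows,
  total_input_bytes
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  dataset_id = 'mydataset'  -- placeholder dataset
  AND table_id = 'mytable'  -- placeholder table
ORDER BY
  start_timestamp DESC
LIMIT 100;
```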
Data retention
--------------
This view contains the BigQuery Storage Write API ingestion history of the past 180 days.
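Because `start_timestamp` is the view's partitioning column, filtering on it restricts the query to the relevant slice of that 180-day history instead of scanning all of it. A minimal sketch, again assuming data in the US multi-region:

```googlesql
-- The filter on the partitioning column start_timestamp prunes partitions,
-- so the query reads only the last 7 days rather than the full history.
SELECT
  start_timestamp,
  SUM(total_rows) AS rows_ingested,
  SUM(total_input_bytes) AS bytes_ingested
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 7 DAY)
GROUP BY
  start_timestamp
ORDER BY
  start_timestamp DESC;
```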
Scope and syntax
----------------
Queries against this view must include a [region qualifier](/bigquery/docs/information-schema-intro#syntax). If you do not specify a regional qualifier, metadata is retrieved from all regions. The following table explains the region scope for this view:
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eThe \u003ccode\u003eINFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER\u003c/code\u003e view provides per-minute aggregated statistics on BigQuery Storage Write API data ingestion for a project's parent folder and its subfolders.\u003c/p\u003e\n"],["\u003cp\u003eQuerying this view requires the \u003ccode\u003ebigquery.tables.list\u003c/code\u003e IAM permission, which is included in roles like \u003ccode\u003ebigquery.admin\u003c/code\u003e, \u003ccode\u003ebigquery.user\u003c/code\u003e, and \u003ccode\u003ebigquery.dataViewer\u003c/code\u003e, but not in basic Owner or Editor roles.\u003c/p\u003e\n"],["\u003cp\u003eEach row in the view represents one-minute interval statistics for ingestion into a specific table, grouped by stream type and error code, and successful requests are marked with the error code \u003ccode\u003eOK\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eThe view's schema includes columns such as \u003ccode\u003estart_timestamp\u003c/code\u003e, \u003ccode\u003eproject_id\u003c/code\u003e, \u003ccode\u003etable_id\u003c/code\u003e, \u003ccode\u003estream_type\u003c/code\u003e, \u003ccode\u003eerror_code\u003c/code\u003e, \u003ccode\u003etotal_requests\u003c/code\u003e, \u003ccode\u003etotal_rows\u003c/code\u003e, and \u003ccode\u003etotal_input_bytes\u003c/code\u003e to provide detailed ingestion metrics.\u003c/p\u003e\n"],["\u003cp\u003eThe view retains up to 180 days of BigQuery Storage Write API ingestion history, and you must specify a region qualifier in your queries, matching the region of the \u003ccode\u003eINFORMATION_SCHEMA\u003c/code\u003e view.\u003c/p\u003e\n"]]],[],null,["# WRITE_API_TIMELINE_BY_FOLDER view\n=================================\n\nThe `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view contains per minute\naggregated BigQuery Storage Write API ingestion statistics for the parent folder of the current project, including its subfolders.\n\nYou can query the `INFORMATION_SCHEMA` Write API views\nto retrieve historical and real-time information about data ingestion into\nBigQuery that uses the BigQuery Storage Write API. 
See [BigQuery Storage Write API](/bigquery/docs/write-api) for more information.\n\nRequired permission\n-------------------\n\nTo query the `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view, you need\nthe `bigquery.tables.list` Identity and Access Management (IAM) permission for the parent\nfolder of the project.\n\nEach of the following predefined IAM roles includes the preceding\npermission:\n\n- `roles/bigquery.admin`\n- `roles/bigquery.user`\n- `roles/bigquery.dataViewer`\n- `roles/bigquery.dataEditor`\n- `roles/bigquery.dataOwner`\n- `roles/bigquery.metadataViewer`\n- `roles/bigquery.resourceAdmin`\n\n| **Caution:** The required \\`bigquery.tables.list\\` permission is *not* included in the [basic roles](/bigquery/docs/access-control-basic-roles) Owner or Editor.\n\nFor more information about BigQuery permissions, see\n[Access control with IAM](/bigquery/docs/access-control).\n\nSchema\n------\n\nWhen you query the `INFORMATION_SCHEMA` BigQuery Storage Write API views, the query results contain historical and real-time information about data ingestion into\nBigQuery using the BigQuery Storage Write API. Each row in the following views represents statistics for ingestion into a specific table, aggregated over\na one minute interval starting at `start_timestamp`. Statistics are grouped by stream type and error code, so there will be one row for each stream type and\neach encountered error code during the one minute interval for each timestamp\nand table combination. Successful requests have the error code set to `OK`. If\nno data was ingested into a table during a certain time period, then no rows are present for the corresponding timestamps for that table.\n\nThe `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_*` views have the\nfollowing schema:\n\nData retention\n--------------\n\nThis view contains the BigQuery Storage Write API ingestion history of the past 180 days.\n\nScope and syntax\n----------------\n\nQueries against this view must include a [region qualifier](/bigquery/docs/information-schema-intro#syntax).\nIf you do not specify a regional qualifier, metadata is retrieved from all\nregions. The following table explains the region scope for this view:\n\nReplace the following:\n\n- Optional: \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: the ID of your Google Cloud project. If not specified, the default project is used.\n- \u003cvar translate=\"no\"\u003eREGION\u003c/var\u003e: any [dataset region name](/bigquery/docs/locations). For example, ```region-us```.\n\n \u003cbr /\u003e\n\n \u003cbr /\u003e\n\n | **Note:** You must use [a region qualifier](/bigquery/docs/information-schema-intro#region_qualifier) to query `INFORMATION_SCHEMA` views. 
The location of the query execution must match the region of the `INFORMATION_SCHEMA` view.\n\n\u003cbr /\u003e\n\n**Example**\n\n- To query data in the US multi-region, use `region-us.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`\n- To query data in the EU multi-region, use `region-eu.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`\n- To query data in the asia-northeast1 region, use `region-asia-northeast1.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`\n\nFor a list of available regions, see [Dataset locations](/bigquery/docs/locations).\n\nExamples\n--------\n\n##### Example 1: Recent BigQuery Storage Write API ingestion failures\n\nThe following example calculates the per minute breakdown of total failed\nrequests for all tables in the project's folder in the last 30 minutes, split by\nstream type and error code: \n\n```googlesql\nSELECT\n start_timestamp,\n stream_type,\n error_code,\n SUM(total_requests) AS num_failed_requests\nFROM\n `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER\nWHERE\n error_code != 'OK'\n AND start_timestamp \u003e TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 30 MINUTE)\nGROUP BY\n start_timestamp,\n stream_type,\n error_code\nORDER BY\n start_timestamp DESC;\n```\n| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.\n\nThe result is similar to the following: \n\n```\n+---------------------+-------------+------------------+---------------------+\n| start_timestamp | stream_type | error_code | num_failed_requests |\n+---------------------+-------------+------------------+---------------------+\n| 2023-02-24 00:25:00 | PENDING | NOT_FOUND | 5 |\n| 2023-02-24 00:25:00 | DEFAULT | INVALID_ARGUMENT | 1 |\n| 2023-02-24 00:25:00 | DEFAULT | DEADLINE_EXCEEDED| 4 |\n| 2023-02-24 00:24:00 | PENDING | INTERNAL | 3 |\n| 2023-02-24 00:24:00 | DEFAULT | INVALID_ARGUMENT | 1 |\n| 2023-02-24 00:24:00 | DEFAULT | DEADLINE_EXCEEDED| 2 |\n+---------------------+-------------+------------------+---------------------+\n```\n\n##### Example 2: Per minute breakdown for all requests with error codes\n\nThe following example calculates a per minute breakdown of successful and failed\nappend requests in the project's folder, split into error code categories.\nThis query could be used to populate a dashboard. 
\n\n```googlesql\nSELECT\n start_timestamp,\n SUM(total_requests) AS total_requests,\n SUM(total_rows) AS total_rows,\n SUM(total_input_bytes) AS total_input_bytes,\n SUM(\n IF(\n error_code IN (\n 'INVALID_ARGUMENT', 'NOT_FOUND', 'CANCELLED', 'RESOURCE_EXHAUSTED',\n 'ALREADY_EXISTS', 'PERMISSION_DENIED', 'UNAUTHENTICATED',\n 'FAILED_PRECONDITION', 'OUT_OF_RANGE'),\n total_requests,\n 0)) AS user_error,\n SUM(\n IF(\n error_code IN (\n 'DEADLINE_EXCEEDED','ABORTED', 'INTERNAL', 'UNAVAILABLE',\n 'DATA_LOSS', 'UNKNOWN'),\n total_requests,\n 0)) AS server_error,\n SUM(IF(error_code = 'OK', 0, total_requests)) AS total_error,\nFROM\n `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER\nGROUP BY\n start_timestamp\nORDER BY\n start_timestamp DESC;\n```\n| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.\n\nThe result is similar to the following: \n\n```\n+---------------------+----------------+------------+-------------------+------------+--------------+-------------+\n| start_timestamp | total_requests | total_rows | total_input_bytes | user_error | server_error | total_error |\n+---------------------+----------------+------------+-------------------+------------+--------------+-------------+\n| 2020-04-15 22:00:00 | 441854 | 441854 | 23784853118 | 0 | 17 | 17 |\n| 2020-04-15 21:59:00 | 355627 | 355627 | 26101982742 | 8 | 0 | 13 |\n| 2020-04-15 21:58:00 | 354603 | 354603 | 26160565341 | 0 | 0 | 0 |\n| 2020-04-15 21:57:00 | 298823 | 298823 | 23877821442 | 2 | 0 | 2 |\n+---------------------+----------------+------------+-------------------+------------+--------------+-------------+\n```\n\n##### Example 3: Tables with the most incoming traffic\n\nThe following example returns the BigQuery Storage Write API ingestion statistics for the 10 tables in the project's folder with the most incoming traffic: \n\n```googlesql\nSELECT\n project_id,\n dataset_id,\n table_id,\n SUM(total_rows) AS num_rows,\n SUM(total_input_bytes) AS num_bytes,\n SUM(total_requests) AS num_requests\nFROM\n `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER\nGROUP BY\n project_id,\n dataset_id,\n table_id\nORDER BY\n num_bytes DESC\nLIMIT 10;\n```\n| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.\n\nThe result is similar to the following: \n\n```\n+----------------------+------------+-------------------------------+------------+----------------+--------------+\n| project_id | dataset_id | table_id | num_rows | num_bytes | num_requests |\n+----------------------+------------+-------------------------------+------------+----------------+--------------+\n| my-project1 | dataset1 | table1 | 8016725532 | 73787301876979 | 8016725532 |\n| my-project2 | dataset1 | table2 | 26319580 | 34199853725409 | 26319580 |\n| my-project1 | dataset2 | table1 | 38355294 | 22879180658120 | 38355294 |\n| my-project3 | dataset1 | table3 | 270126906 | 17594235226765 | 270126906 |\n| my-project2 | dataset2 | table2 | 95511309 | 17376036299631 | 95511309 |\n| my-project2 | dataset2 | table3 | 46500443 | 12834920497777 | 46500443 |\n| my-project3 | dataset2 | table4 | 25846270 | 7487917957360 | 25846270 |\n| my-project4 | dataset1 | table4 | 18318404 | 5665113765882 | 18318404 |\n| my-project4 | dataset1 | table5 | 42829431 | 5343969665771 | 42829431 |\n| my-project4 | dataset1 | table6 | 8771021 | 5119004622353 | 8771021 |\n+----------------------+------------+-------------------------------+------------+----------------+--------------+\n```"]]
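As a complement to the preceding examples, the following sketch (not part of the original page; the 24-hour window and the `region-us` qualifier are arbitrary assumptions) ranks tables by their failure rate over the last day, using only the columns documented in the schema above:

```googlesql
SELECT
  project_id,
  dataset_id,
  table_id,
  SUM(IF(error_code != 'OK', total_requests, 0)) AS failed_requests,
  SUM(total_requests) AS total_requests,
  -- SAFE_DIVIDE returns NULL instead of erroring when a table
  -- has no requests in the window.
  ROUND(
    100 * SAFE_DIVIDE(
      SUM(IF(error_code != 'OK', total_requests, 0)),
      SUM(total_requests)),
    2) AS failure_rate_pct
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 24 HOUR)
GROUP BY
  project_id,
  dataset_id,
  table_id
ORDER BY
  failure_rate_pct DESC;
```

A high failure rate concentrated in one table usually points at a schema or permission problem with that table, whereas uniform failures across tables suggest a regional or quota issue.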