# WRITE_API_TIMELINE_BY_FOLDER view
=================================

Last updated (UTC): 2025-09-04.

Key points:

- The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view provides per-minute aggregated statistics on BigQuery Storage Write API data ingestion for a project's parent folder and its subfolders.
- Querying this view requires the `bigquery.tables.list` IAM permission, which is included in roles such as `bigquery.admin`, `bigquery.user`, and `bigquery.dataViewer`, but not in the basic Owner or Editor roles.
- Each row in the view represents one-minute interval statistics for ingestion into a specific table, grouped by stream type and error code; successful requests are marked with the error code `OK`.
- The view's schema includes columns such as `start_timestamp`, `project_id`, `table_id`, `stream_type`, `error_code`, `total_requests`, `total_rows`, and `total_input_bytes`.
- The view retains up to 180 days of BigQuery Storage Write API ingestion history, and queries must specify a region qualifier matching the region of the `INFORMATION_SCHEMA` view.

The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view contains per-minute aggregated BigQuery Storage Write API ingestion statistics for the parent folder of the current project, including its subfolders.

You can query the `INFORMATION_SCHEMA` Write API views to retrieve historical and real-time information about data ingestion into BigQuery that uses the BigQuery Storage Write API. See [BigQuery Storage Write API](/bigquery/docs/write-api) for more information.

Required permission
-------------------

To query the `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER` view, you need the `bigquery.tables.list` Identity and Access Management (IAM) permission for the parent folder of the project.

Each of the following predefined IAM roles includes the preceding permission:

- `roles/bigquery.admin`
- `roles/bigquery.user`
- `roles/bigquery.dataViewer`
- `roles/bigquery.dataEditor`
- `roles/bigquery.dataOwner`
- `roles/bigquery.metadataViewer`
- `roles/bigquery.resourceAdmin`

| **Caution:** The required `bigquery.tables.list` permission is *not* included in the [basic roles](/bigquery/docs/access-control-basic-roles) Owner or Editor.

For more information about BigQuery permissions, see [Access control with IAM](/bigquery/docs/access-control).

Schema
------

When you query the `INFORMATION_SCHEMA` BigQuery Storage Write API views, the query results contain historical and real-time information about data ingestion into BigQuery using the BigQuery Storage Write API. Each row in these views represents statistics for ingestion into a specific table, aggregated over a one-minute interval starting at `start_timestamp`.
Statistics are grouped by stream type and error code, so there is one row for each stream type and each error code encountered during the one-minute interval, for each timestamp and table combination. Successful requests have the error code set to `OK`. If no data was ingested into a table during a certain time period, then no rows are present for the corresponding timestamps for that table.

The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_*` views have the following schema (listed here are the columns referenced elsewhere on this page):

- `start_timestamp`: start of the one-minute aggregation interval
- `project_id`: ID of the project that the data was ingested into
- `dataset_id`: ID of the dataset
- `table_id`: ID of the table
- `stream_type`: type of stream used for the ingestion, such as `DEFAULT` or `PENDING`
- `error_code`: error code returned for the requests; `OK` for successful requests
- `total_requests`: total number of requests within the one-minute interval
- `total_rows`: total number of rows in all requests within the interval
- `total_input_bytes`: total number of bytes in all rows within the interval

Data retention
--------------

This view contains the BigQuery Storage Write API ingestion history of the past 180 days.

Scope and syntax
----------------

Queries against this view must include a [region qualifier](/bigquery/docs/information-schema-intro#syntax) and take the form `PROJECT_ID.region-REGION.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`.

Replace the following:

- Optional: `PROJECT_ID`: the ID of your Google Cloud project. If not specified, the default project is used.
- `REGION`: any [dataset region name](/bigquery/docs/locations). For example, `region-us`.

| **Note:** You must use [a region qualifier](/bigquery/docs/information-schema-intro#region_qualifier) to query `INFORMATION_SCHEMA` views.
The location of the query execution must match the region of the `INFORMATION_SCHEMA` view.

**Example**

- To query data in the US multi-region, use `region-us.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`
- To query data in the EU multi-region, use `region-eu.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`
- To query data in the asia-northeast1 region, use `region-asia-northeast1.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER`

For a list of available regions, see [Dataset locations](/bigquery/docs/locations).

Examples
--------

##### Example 1: Recent BigQuery Storage Write API ingestion failures

The following example calculates the per-minute breakdown of total failed requests for all tables in the project's folder in the last 30 minutes, split by stream type and error code:

```googlesql
SELECT
  start_timestamp,
  stream_type,
  error_code,
  SUM(total_requests) AS num_failed_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  error_code != 'OK'
  AND start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 30 MINUTE)
GROUP BY
  start_timestamp,
  stream_type,
  error_code
ORDER BY
  start_timestamp DESC;
```

| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.

The result is similar to the following:

```
+---------------------+-------------+-------------------+---------------------+
| start_timestamp     | stream_type | error_code        | num_failed_requests |
+---------------------+-------------+-------------------+---------------------+
| 2023-02-24 00:25:00 | PENDING     | NOT_FOUND         | 5                   |
| 2023-02-24 00:25:00 | DEFAULT     | INVALID_ARGUMENT  | 1                   |
| 2023-02-24 00:25:00 | DEFAULT     | DEADLINE_EXCEEDED | 4                   |
| 2023-02-24 00:24:00 | PENDING     | INTERNAL          | 3                   |
| 2023-02-24 00:24:00 | DEFAULT     | INVALID_ARGUMENT  | 1                   |
| 2023-02-24 00:24:00 | DEFAULT     | DEADLINE_EXCEEDED | 2                   |
+---------------------+-------------+-------------------+---------------------+
```

##### Example 2: Per-minute breakdown for all requests with error codes

The following example calculates a per-minute breakdown of successful and failed append requests in the project's folder, split into error code categories. This query could be used to populate a dashboard:

```googlesql
SELECT
  start_timestamp,
  SUM(total_requests) AS total_requests,
  SUM(total_rows) AS total_rows,
  SUM(total_input_bytes) AS total_input_bytes,
  SUM(
    IF(
      error_code IN (
        'INVALID_ARGUMENT', 'NOT_FOUND', 'CANCELLED', 'RESOURCE_EXHAUSTED',
        'ALREADY_EXISTS', 'PERMISSION_DENIED', 'UNAUTHENTICATED',
        'FAILED_PRECONDITION', 'OUT_OF_RANGE'),
      total_requests,
      0)) AS user_error,
  SUM(
    IF(
      error_code IN (
        'DEADLINE_EXCEEDED', 'ABORTED', 'INTERNAL', 'UNAVAILABLE',
        'DATA_LOSS', 'UNKNOWN'),
      total_requests,
      0)) AS server_error,
  SUM(IF(error_code = 'OK', 0, total_requests)) AS total_error
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
GROUP BY
  start_timestamp
ORDER BY
  start_timestamp DESC;
```

The result is similar to the following:

```
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
| start_timestamp     | total_requests | total_rows | total_input_bytes | user_error | server_error | total_error |
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
| 2020-04-15 22:00:00 | 441854         | 441854     | 23784853118       | 0          | 17           | 17          |
| 2020-04-15 21:59:00 | 355627         | 355627     | 26101982742       | 8          | 0            | 13          |
| 2020-04-15 21:58:00 | 354603         | 354603     | 26160565341       | 0          | 0            | 0           |
| 2020-04-15 21:57:00 | 298823         | 298823     | 23877821442       | 2          | 0            | 2           |
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
```

##### Example 3: Tables with the most incoming traffic

The following example returns
the BigQuery Storage Write API ingestion statistics for the 10 tables in the project's folder with the most incoming traffic:

```googlesql
SELECT
  project_id,
  dataset_id,
  table_id,
  SUM(total_rows) AS num_rows,
  SUM(total_input_bytes) AS num_bytes,
  SUM(total_requests) AS num_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
GROUP BY
  project_id,
  dataset_id,
  table_id
ORDER BY
  num_bytes DESC
LIMIT 10;
```

The result is similar to the following:

```
+-------------+------------+----------+------------+----------------+--------------+
| project_id  | dataset_id | table_id | num_rows   | num_bytes      | num_requests |
+-------------+------------+----------+------------+----------------+--------------+
| my-project1 | dataset1   | table1   | 8016725532 | 73787301876979 | 8016725532   |
| my-project2 | dataset1   | table2   | 26319580   | 34199853725409 | 26319580     |
| my-project1 | dataset2   | table1   | 38355294   | 22879180658120 | 38355294     |
| my-project3 | dataset1   | table3   | 270126906  | 17594235226765 | 270126906    |
| my-project2 | dataset2   | table2   | 95511309   | 17376036299631 | 95511309     |
| my-project2 | dataset2   | table3   | 46500443   | 12834920497777 | 46500443     |
| my-project3 | dataset2   | table4   | 25846270   | 7487917957360  | 25846270     |
| my-project4 | dataset1   | table4   | 18318404   | 5665113765882  | 18318404     |
| my-project4 | dataset1   | table5   | 42829431   | 5343969665771  | 42829431    |
| my-project4 | dataset1   | table6   | 8771021    | 5119004622353  | 8771021      |
+-------------+------------+----------+------------+----------------+--------------+
```
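As an additional sketch that is not part of the official example set: if you want a per-table error rate rather than raw counts, the columns used in the preceding examples can be combined as follows. The 24-hour window and the `region-us` qualifier are illustrative choices, not requirements:

```googlesql
-- Per-table Storage Write API error rate over the past 24 hours.
SELECT
  project_id,
  dataset_id,
  table_id,
  SUM(total_requests) AS num_requests,
  SUM(IF(error_code != 'OK', total_requests, 0)) AS num_failed_requests,
  -- SAFE_DIVIDE returns NULL instead of an error when the divisor is 0.
  SAFE_DIVIDE(
    SUM(IF(error_code != 'OK', total_requests, 0)),
    SUM(total_requests)) AS error_rate
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 24 HOUR)
GROUP BY
  project_id,
  dataset_id,
  table_id
ORDER BY
  error_rate DESC;
```

Sorting by `error_rate` rather than `num_failed_requests` surfaces low-traffic tables whose ingestion is failing consistently, which a raw failure count would hide.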