WRITE_API_TIMELINE_BY_ORGANIZATION view
=======================================

The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION` view contains per-minute aggregated streaming statistics for the whole organization associated with the current project.

You can query the `INFORMATION_SCHEMA` Write API views to retrieve historical and real-time information about data ingestion into BigQuery that uses the BigQuery Storage Write API. See [BigQuery Storage Write API](/bigquery/docs/write-api) for more information.
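For orientation, here is a minimal sketch of a query against this view. It assumes your organization has recent Write API traffic in the `us` multi-region and simply returns the latest per-minute rows:

```googlesql
-- Minimal sketch: fetch the most recent per-minute ingestion statistics
-- for the organization. Assumes data resides in the `us` multi-region.
SELECT
  start_timestamp,
  project_id,
  dataset_id,
  table_id,
  stream_type,
  error_code,
  total_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
ORDER BY
  start_timestamp DESC
LIMIT 10;
```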
Required permission
-------------------
To query the `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION` view, you need the `bigquery.tables.list` Identity and Access Management (IAM) permission for the organization.
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[[["\u003cp\u003eThe \u003ccode\u003eINFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION\u003c/code\u003e view provides per-minute aggregated statistics on data ingestion using the BigQuery Storage Write API for an entire organization.\u003c/p\u003e\n"],["\u003cp\u003eTo query this view, you need the \u003ccode\u003ebigquery.tables.list\u003c/code\u003e IAM permission, which is included in roles such as \u003ccode\u003eroles/bigquery.admin\u003c/code\u003e, \u003ccode\u003eroles/bigquery.user\u003c/code\u003e, and others, but is \u003cem\u003enot\u003c/em\u003e included in basic Owner or Editor roles.\u003c/p\u003e\n"],["\u003cp\u003eThis view's schema includes details like \u003ccode\u003estart_timestamp\u003c/code\u003e, \u003ccode\u003eproject_id\u003c/code\u003e, \u003ccode\u003edataset_id\u003c/code\u003e, \u003ccode\u003etable_id\u003c/code\u003e, \u003ccode\u003estream_type\u003c/code\u003e, \u003ccode\u003eerror_code\u003c/code\u003e, and various aggregated metrics (total_requests, total_rows, total_input_bytes).\u003c/p\u003e\n"],["\u003cp\u003eThe view retains data for the past 180 days, allowing historical analysis of streaming ingestion.\u003c/p\u003e\n"],["\u003cp\u003eQueries against the \u003ccode\u003eWRITE_API_TIMELINE_BY_ORGANIZATION\u003c/code\u003e must include a region qualifier in its syntax, such as \u003ccode\u003eregion-us\u003c/code\u003e or \u003ccode\u003eregion-eu\u003c/code\u003e, and the location of query execution must match the region of the view.\u003c/p\u003e\n"]]],[],null,["# WRITE_API_TIMELINE_BY_ORGANIZATION view\n=======================================\n\nThe `INFORMATION_SCHEMA.STREAMING_TIMELINE_BY_ORGANIZATION` view contains per\nminute aggregated streaming statistics for the whole organization associated\nwith the current project.\n\nYou can query the `INFORMATION_SCHEMA` Write API views\nto retrieve historical and real-time information about data ingestion into\nBigQuery that uses the BigQuery Storage Write API. See [BigQuery Storage Write API](/bigquery/docs/write-api) for more information.\n\nRequired permission\n-------------------\n\nTo query the `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION` view, you need\nthe `bigquery.tables.list` Identity and Access Management (IAM) permission for the\norganization.\n\nEach of the following predefined IAM roles includes the required\npermission:\n\n- `roles/bigquery.admin`\n- `roles/bigquery.user`\n- `roles/bigquery.dataViewer`\n- `roles/bigquery.dataEditor`\n- `roles/bigquery.dataOwner`\n- `roles/bigquery.metadataViewer`\n- `roles/bigquery.resourceAdmin`\n\n| **Caution:** The required \\`bigquery.tables.list\\` permission is *not* included in the [basic roles](/bigquery/docs/access-control-basic-roles) Owner or Editor.\n\nFor more information about BigQuery permissions, see\n[Access control with IAM](/bigquery/docs/access-control).\n\nSchema\n------\n\nWhen you query the `INFORMATION_SCHEMA` BigQuery Storage Write API views, the query results contain historical and real-time information about data ingestion into\nBigQuery using the BigQuery Storage Write API. 
Schema
------

When you query the `INFORMATION_SCHEMA` BigQuery Storage Write API views, the query results contain historical and real-time information about data ingestion into BigQuery using the BigQuery Storage Write API. Each row in the following views represents statistics for ingestion into a specific table, aggregated over a one-minute interval starting at `start_timestamp`. Statistics are grouped by stream type and error code, so there is one row for each stream type and each encountered error code during the one-minute interval for each timestamp and table combination. Successful requests have the error code set to `OK`. If no data was ingested into a table during a certain time period, then no rows are present for the corresponding timestamps for that table.

The `INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_*` views have the following schema:

| Column name | Data type | Description |
| --- | --- | --- |
| `start_timestamp` | `TIMESTAMP` | Start time of the one-minute interval for the aggregated statistics. |
| `project_id` | `STRING` | ID of the project. |
| `dataset_id` | `STRING` | ID of the dataset. |
| `table_id` | `STRING` | ID of the table. |
| `stream_type` | `STRING` | Type of stream used for the data ingestion. |
| `error_code` | `STRING` | Error code returned for the requests in this row. `OK` for successful requests. |
| `total_requests` | `INTEGER` | Total number of requests within the one-minute interval. |
| `total_rows` | `INTEGER` | Total number of rows from all requests within the one-minute interval. |
| `total_input_bytes` | `INTEGER` | Total number of bytes from all rows within the one-minute interval. |

Data retention
--------------

This view contains the streaming history of the past 180 days.

Scope and syntax
----------------

Queries against this view must include a [region qualifier](/bigquery/docs/information-schema-intro#syntax). The following table explains the region scope for this view:

| View name | Resource scope | Region scope |
| --- | --- | --- |
| `[PROJECT_ID.]region-REGION.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION` | Organization that contains the specified project | `REGION` |

Replace the following:

- Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
- REGION: any [dataset region name](/bigquery/docs/locations). For example, `region-us`.

| **Note:** You must use [a region qualifier](/bigquery/docs/information-schema-intro#region_qualifier) to query `INFORMATION_SCHEMA` views. The location of the query execution must match the region of the `INFORMATION_SCHEMA` view.

**Example**

- To query data in the US multi-region, use `` `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION ``
- To query data in the EU multi-region, use `` `region-eu`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION ``
- To query data in the asia-northeast1 region, use `` `region-asia-northeast1`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION ``

For a list of available regions, see [Dataset locations](/bigquery/docs/locations).

Examples
--------

##### Example 1: Recent BigQuery Storage Write API ingestion failures

The following example calculates the per-minute breakdown of total failed requests for all tables in the project's organization in the last 30 minutes, split by error code:

```googlesql
SELECT
  start_timestamp,
  stream_type,
  error_code,
  SUM(total_requests) AS num_failed_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
WHERE
  error_code != 'OK'
  AND start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 30 MINUTE)
GROUP BY
  start_timestamp,
  stream_type,
  error_code
ORDER BY
  start_timestamp DESC;
```

| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.

The result is similar to the following:

```
+---------------------+-------------+-------------------+---------------------+
| start_timestamp     | stream_type | error_code        | num_failed_requests |
+---------------------+-------------+-------------------+---------------------+
| 2023-02-24 00:25:00 | PENDING     | NOT_FOUND         | 5                   |
| 2023-02-24 00:25:00 | DEFAULT     | INVALID_ARGUMENT  | 1                   |
| 2023-02-24 00:25:00 | DEFAULT     | DEADLINE_EXCEEDED | 4                   |
| 2023-02-24 00:24:00 | PENDING     | INTERNAL          | 3                   |
| 2023-02-24 00:24:00 | DEFAULT     | INVALID_ARGUMENT  | 1                   |
| 2023-02-24 00:24:00 | DEFAULT     | DEADLINE_EXCEEDED | 2                   |
+---------------------+-------------+-------------------+---------------------+
```
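To turn these counts into a per-minute failure rate, one hedged variation on the query above (same view, same 30-minute window) is the following sketch; `SAFE_DIVIDE` returns `NULL` rather than an error for groups with zero total requests:

```googlesql
-- Sketch: per-minute failure ratio across the organization for the
-- last 30 minutes.
SELECT
  start_timestamp,
  SUM(IF(error_code != 'OK', total_requests, 0)) AS failed_requests,
  SUM(total_requests) AS all_requests,
  SAFE_DIVIDE(
    SUM(IF(error_code != 'OK', total_requests, 0)),
    SUM(total_requests)) AS failure_ratio
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
WHERE
  start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 30 MINUTE)
GROUP BY
  start_timestamp
ORDER BY
  start_timestamp DESC;
```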
##### Example 2: Per-minute breakdown for all requests with error codes

The following example calculates a per-minute breakdown of successful and failed append requests in the project's organization, split into error code categories. This query could be used to populate a dashboard.

```googlesql
SELECT
  start_timestamp,
  SUM(total_requests) AS total_requests,
  SUM(total_rows) AS total_rows,
  SUM(total_input_bytes) AS total_input_bytes,
  SUM(
    IF(
      error_code IN (
        'INVALID_ARGUMENT', 'NOT_FOUND', 'CANCELLED', 'RESOURCE_EXHAUSTED',
        'ALREADY_EXISTS', 'PERMISSION_DENIED', 'UNAUTHENTICATED',
        'FAILED_PRECONDITION', 'OUT_OF_RANGE'),
      total_requests,
      0)) AS user_error,
  SUM(
    IF(
      error_code IN (
        'DEADLINE_EXCEEDED', 'ABORTED', 'INTERNAL', 'UNAVAILABLE',
        'DATA_LOSS', 'UNKNOWN'),
      total_requests,
      0)) AS server_error,
  SUM(IF(error_code = 'OK', 0, total_requests)) AS total_error
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
GROUP BY
  start_timestamp
ORDER BY
  start_timestamp DESC;
```

| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.

The result is similar to the following:

```
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
| start_timestamp     | total_requests | total_rows | total_input_bytes | user_error | server_error | total_error |
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
| 2020-04-15 22:00:00 | 441854         | 441854     | 23784853118       | 0          | 17           | 17          |
| 2020-04-15 21:59:00 | 355627         | 355627     | 26101982742       | 13         | 0            | 13          |
| 2020-04-15 21:58:00 | 354603         | 354603     | 26160565341       | 0          | 0            | 0           |
| 2020-04-15 21:57:00 | 298823         | 298823     | 23877821442       | 2          | 0            | 2           |
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
```
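If you want to track quota pressure separately from other user errors, one possible refinement is to break out `RESOURCE_EXHAUSTED` into its own bucket. The sketch below assumes `RESOURCE_EXHAUSTED` is the error code you treat as a quota signal; adjust the categorization to match your own monitoring conventions:

```googlesql
-- Sketch: count quota-related failures separately, assuming that
-- RESOURCE_EXHAUSTED is the error code of interest.
SELECT
  start_timestamp,
  SUM(IF(error_code = 'RESOURCE_EXHAUSTED', total_requests, 0)) AS quota_error,
  SUM(IF(error_code NOT IN ('OK', 'RESOURCE_EXHAUSTED'), total_requests, 0)) AS other_error
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
GROUP BY
  start_timestamp
ORDER BY
  start_timestamp DESC;
```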
##### Example 3: Tables with the most incoming traffic

The following example returns the BigQuery Storage Write API ingestion statistics for the 10 tables in the project's organization with the most incoming traffic:

```googlesql
SELECT
  project_id,
  dataset_id,
  table_id,
  SUM(total_rows) AS num_rows,
  SUM(total_input_bytes) AS num_bytes,
  SUM(total_requests) AS num_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
GROUP BY
  project_id,
  dataset_id,
  table_id
ORDER BY
  num_bytes DESC
LIMIT 10;
```

| **Note:** `INFORMATION_SCHEMA` view names are case-sensitive.

The result is similar to the following:

```
+-------------+------------+----------+------------+----------------+--------------+
| project_id  | dataset_id | table_id | num_rows   | num_bytes      | num_requests |
+-------------+------------+----------+------------+----------------+--------------+
| my-project1 | dataset1   | table1   | 8016725532 | 73787301876979 | 8016725532   |
| my-project2 | dataset1   | table2   | 26319580   | 34199853725409 | 26319580     |
| my-project1 | dataset2   | table1   | 38355294   | 22879180658120 | 38355294     |
| my-project3 | dataset1   | table3   | 270126906  | 17594235226765 | 270126906    |
| my-project2 | dataset2   | table2   | 95511309   | 17376036299631 | 95511309     |
| my-project2 | dataset2   | table3   | 46500443   | 12834920497777 | 46500443     |
| my-project3 | dataset2   | table4   | 25846270   | 7487917957360  | 25846270     |
| my-project4 | dataset1   | table4   | 18318404   | 5665113765882  | 18318404     |
| my-project4 | dataset1   | table5   | 42829431   | 5343969665771  | 42829431     |
| my-project4 | dataset1   | table6   | 8771021    | 5119004622353  | 8771021      |
+-------------+------------+----------+------------+----------------+--------------+
```
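To drill into one of the tables surfaced by a query like Example 3, a sketch such as the following breaks that table's traffic down by hour and stream type. The project, dataset, and table names are hypothetical placeholders; substitute your own:

```googlesql
-- Sketch: hourly ingestion for a single table, broken down by stream
-- type. The identifiers below are hypothetical placeholders.
SELECT
  TIMESTAMP_TRUNC(start_timestamp, HOUR) AS hour,
  stream_type,
  SUM(total_rows) AS num_rows,
  SUM(total_input_bytes) AS num_bytes
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_ORGANIZATION
WHERE
  project_id = 'my-project1'
  AND dataset_id = 'dataset1'
  AND table_id = 'table1'
GROUP BY
  hour,
  stream_type
ORDER BY
  hour DESC;
```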