Last updated 2025-09-03 UTC.

# Trace data exports overview

| **Beta**
|
| This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
| of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA products and features are available "as is" and might have limited support.
|
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

This page provides a conceptual overview of exporting trace data
using Cloud Trace. You might want to export trace data for the following
reasons:

- To store trace data for a period longer than the default retention period of 30 days.
- To let you use BigQuery tools to analyze your trace data. For
  example, using BigQuery, you can identify span counts and
  quantiles. For information about an example query, see the
  [HipsterShop query](/trace/docs/trace-export-bigquery#hipstershop-query).

How exports work
----------------

Exporting involves creating a *sink* for a Google Cloud project.
A *sink* defines a BigQuery dataset as the destination.

You can create a sink by using the Cloud Trace API or by using the
Google Cloud CLI.

Sink properties and terminology
-------------------------------

Sinks are defined for a Google Cloud project and have the following
properties:

- **Name**: A name for the sink.
  For example, a name might be:

  ```
  "projects/PROJECT_NUMBER/traceSinks/my-sink"
  ```

  where <var translate="no">PROJECT_NUMBER</var> is the sink's Google Cloud project number
  and `my-sink` is the sink identifier.
- **Parent**: The resource in which you create the sink. The parent must be
  a Google Cloud project:

  ```
  "projects/PROJECT_ID"
  ```

  The <var translate="no">PROJECT_ID</var> can either be a Google Cloud project identifier or
  number.
- **Destination**: A single place to send trace spans.
  Trace supports exporting traces to BigQuery.
  The destination can be the sink's Google Cloud project or
  any other Google Cloud project that is in the same organization.

  For example, a valid destination is:

  ```
  bigquery.googleapis.com/projects/DESTINATION_PROJECT_NUMBER/datasets/DATASET_ID
  ```

  where <var translate="no">DESTINATION_PROJECT_NUMBER</var> is the
  Google Cloud project number of the destination, and
  <var translate="no">DATASET_ID</var> is the BigQuery dataset identifier.
- **Writer Identity**: A service account name. The export destination's owner
  must give this service account permission to write to the export
  destination. When exporting traces, Trace adopts this
  identity for authorization. For increased security, new sinks get a unique
  service account:

  ```
  export-PROJECT_NUMBER-GENERATED_VALUE@gcp-sa-cloud-trace.iam.gserviceaccount.com
  ```

  where <var translate="no">PROJECT_NUMBER</var> is your Google Cloud project number, in hex,
  and <var translate="no">GENERATED_VALUE</var> is a randomly generated value.

  You don't create, own, or manage the service account that is identified by
  the writer identity of a sink.
  When you create a sink,
  Trace creates the service account that the sink requires.
  This service account isn't included in the list of service accounts for
  your project until it has at least one Identity and Access Management binding. You add this
  binding when you configure a sink destination.

  For information on using the writer identity, see
  [destination permissions](/trace/docs/trace-export-configure#dest-auth).

How sinks work
--------------

Every time a trace span arrives in a project, Trace
exports a copy of the span.

Traces that Trace received before the sink was created
cannot be exported.

Access control
--------------

To create or modify a sink, you must have one of the following Identity and Access Management
roles:

- Trace Admin
- Trace User
- Project Owner
- Project Editor

For more information, see [Access control](/logging/docs/access-control).

To export traces to a destination, the sink's writer service account
must be permitted to write to the destination. For more information about writer
identities, see [Sink properties](#sink-terms) on this page.

Quotas and limits
-----------------

Cloud Trace uses the
[BigQuery streaming API](/bigquery/quotas#streaming_inserts)
to send trace spans to the destination. Cloud Trace batches API calls.
Cloud Trace doesn't implement a retry or throttling mechanism. Trace spans
might not be exported successfully if the amount of data exceeds the
destination quotas.

For details on BigQuery quotas and limits, see
[Quotas and limits](/bigquery/quotas).

Pricing
-------

Exporting traces doesn't incur Cloud Trace charges. However, you might
incur BigQuery charges.
See [BigQuery pricing](https://cloud.google.com/bigquery/pricing) for more information.

### Estimating your costs

BigQuery charges for data ingestion and storage. To estimate
your monthly BigQuery costs, do the following:

1. Estimate the total number of trace spans that are ingested in a month.

   For information about how to view usage, see
   [View usage by billing account](/stackdriver/estimating-bills#billing-acct-usage).
2. Estimate the streaming requirements based on the number of trace spans
   ingested.

   Each span is written to a table row. Each row in BigQuery requires
   at least 1024 bytes. Therefore, a *lower bound* on your
   BigQuery streaming requirements is to
   assign 1024 bytes to each span. For example, if your Google Cloud
   project ingested 200 spans, then those spans require at least
   204,800 bytes for the streaming insert.
3. Use the [Pricing calculator](https://cloud.google.com/products/calculator/) to estimate your
   BigQuery costs due to storage, streaming inserts, and queries.

### Viewing and managing your BigQuery usage

You can use Metrics Explorer to view your BigQuery usage. You can
also create an alerting policy that notifies you if your BigQuery
usage exceeds predefined limits. The following table contains the settings
to create an alerting policy. You can use the settings in the target pane
table when creating a chart or when using
[Metrics Explorer](/monitoring/charts/metrics-explorer).

To create an alerting policy that triggers when the ingested
[BigQuery](/bigquery/docs) metrics exceed a user-defined
level, use the following settings.

#### Steps to create an [alerting policy](/monitoring/alerts/using-alerting-ui#create-policy)

To create an alerting policy, do the following:

1. In the Google Cloud console, go to the **Alerting** page:

   [Go to **Alerting**](https://console.cloud.google.com/monitoring/alerting)

   If you use the search bar to find this page, then select the result whose subheading is
   **Monitoring**.
2. If you haven't created your notification channels and if you want to be notified, then click **Edit Notification Channels** and add your notification channels.
   Return to the **Alerting** page after you add your channels.
3. From the **Alerting** page, select **Create policy**.
4. To select the resource, metric, and filters, expand the **Select a metric** menu and then use the values in the **New condition** table:
   1. Optional: To limit the menu to relevant entries, enter the resource or metric name in the filter bar.
   2. Select a **Resource type**. For example, select **VM instance**.
   3. Select a **Metric category**. For example, select **instance**.
   4. Select a **Metric**. For example, select **CPU Utilization**.
   5. Select **Apply**.
5. Click **Next** and then configure the alerting policy trigger. To complete these fields, use the values in the **Configure alert trigger** table.
6. Click **Next**.
7. Optional: To add notifications to your alerting policy, click
   **Notification channels**. In the dialog, select one or more notification
   channels from the menu, and then click **OK**.

   To be notified when incidents are opened and closed, check
   **Notify on incident closure**. By default, notifications are sent only when
   incidents are opened.
8. Optional: Update the **Incident autoclose duration**. This field determines when Monitoring closes incidents in the absence of metric data.
9. Optional: Click **Documentation**, and then add any information that you want included in a notification message.
10. Click **Alert name** and enter a name for the alerting policy.
11. Click **Create Policy**.

What's next
-----------

To configure a sink, see [Exporting traces](/trace/docs/trace-export-configure).
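The lower-bound arithmetic from the "Estimating your costs" section above can be sketched as a few lines of Python. This is a hypothetical helper, not part of Cloud Trace or any Google Cloud SDK; the only input it relies on from the text is BigQuery's minimum charged size of 1024 bytes per streamed row.

```python
def streaming_lower_bound_bytes(span_count: int, min_row_bytes: int = 1024) -> int:
    """Return a lower bound on BigQuery streaming-insert bytes for exported spans.

    Each exported span is written to one table row, and BigQuery counts
    each streamed row as at least ``min_row_bytes`` (1024 bytes), so the
    streaming volume is at least span_count * min_row_bytes. Actual usage
    can be higher because rows larger than the minimum are billed at
    their real size. Hypothetical helper for illustration only.
    """
    return span_count * min_row_bytes

# The example from the text: 200 ingested spans need at least 204,800 bytes.
print(streaming_lower_bound_bytes(200))  # 204800
```

Feeding the resulting byte count into the Pricing calculator, alongside your expected storage and query volumes, gives a rough monthly floor for the BigQuery side of the export.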