where PROJECT_NUMBER is your Google Cloud project number, in hexadecimal, and GENERATED_VALUE is a randomly generated value.

You don't create, own, or manage the service account that is identified by a sink's writer identity. When you create a sink, Trace creates the service account that the sink requires.

This service account isn't included in your project's list of service accounts until it has at least one Identity and Access Management binding. You add this binding when you configure the sink destination.

To export traces to a destination, the sink's writer service account must be granted write permission on that destination. For more information about writer identities, see Sink properties on this page.
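For example, if the destination is a BigQuery dataset, one way to grant the writer identity write access is a project-level IAM binding. This is a hedged sketch, not the only option: the service-account address below is the placeholder pattern described above, and your organization may prefer a narrower dataset-level grant instead of `roles/bigquery.dataEditor` on the whole project.

```
# Grant the sink's writer identity permission to write to BigQuery in the
# destination project. Replace DESTINATION_PROJECT_ID, PROJECT_NUMBER, and
# GENERATED_VALUE with your own values.
gcloud projects add-iam-policy-binding DESTINATION_PROJECT_ID \
    --member="serviceAccount:export-PROJECT_NUMBER-GENERATED_VALUE@gcp-sa-cloud-trace.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataEditor"
```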
Quotas and limits

Cloud Trace uses the BigQuery streaming API to send trace spans to the destination. Cloud Trace batches its API calls.

Cloud Trace doesn't implement a retry or throttling mechanism. If the amount of data exceeds the destination's quotas, trace spans might not be exported successfully.

Each span is written to a table row. Each row in BigQuery requires at least 1,024 bytes. Therefore, a lower bound on your BigQuery streaming requirements is to allocate 1,024 bytes to each span. For example, if your Google Cloud project ingested 200 spans, those spans require at least 204,800 bytes for the streaming insert.
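The arithmetic above can be sketched as a small helper. The only input taken from this page is the 1,024-byte minimum row size; real spans can be larger, so this is a lower bound, not a usage prediction.

```python
# Lower bound on BigQuery streaming-insert bytes for exported trace spans.
# Each exported span becomes one table row, and BigQuery counts a minimum
# of 1,024 bytes per row for streaming inserts.

MIN_ROW_BYTES = 1024

def min_streaming_bytes(span_count: int) -> int:
    """Return the lower bound on streamed bytes for span_count spans."""
    if span_count < 0:
        raise ValueError("span_count must be non-negative")
    return span_count * MIN_ROW_BYTES

# For example, 200 ingested spans require at least 204,800 bytes.
print(min_streaming_bytes(200))  # 204800
```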
Use the Pricing calculator to estimate your BigQuery costs for storage, streaming inserts, and queries.
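As a rough monthly estimate, you can combine the per-span lower bound with your own unit price. The `price_per_gib` value below is a parameter you supply from the pricing calculator, not a published rate; this page doesn't state current prices.

```python
# Sketch of a monthly streaming-insert cost estimate for exported spans.
# price_per_gib is whatever the pricing calculator currently lists for
# streaming inserts in your region; it is an input here, not a real rate.

MIN_ROW_BYTES = 1024

def estimate_streaming_cost(spans_per_month: int, price_per_gib: float) -> float:
    """Estimate the monthly streaming-insert cost, as a lower bound."""
    gib = spans_per_month * MIN_ROW_BYTES / 2**30
    return gib * price_per_gib

# e.g. 100 million spans per month at a hypothetical 0.05 per GiB:
print(round(estimate_streaming_cost(100_000_000, 0.05), 2))  # 4.77
```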
Viewing and managing your BigQuery usage

You can use Metrics Explorer to view your BigQuery usage. You can also create an alerting policy that notifies you when your BigQuery usage exceeds predefined limits. The following table contains the settings for creating an alerting policy. You can use the settings in the target pane table when creating a chart or when using Metrics Explorer.

To create an alerting policy that triggers when the ingested BigQuery metrics exceed a user-defined level, use the following settings.
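Such a policy can also be created from the command line with `gcloud alpha monitoring policies create --policy-from-file`. The condition file below is only a hedged sketch: the `bigquery.googleapis.com/storage/uploaded_bytes` metric name and the 1 GiB threshold are illustrative assumptions to adapt, not values taken from this page.

```
{
  "displayName": "BigQuery uploaded bytes above limit",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Uploaded bytes per minute",
      "conditionThreshold": {
        "filter": "metric.type=\"bigquery.googleapis.com/storage/uploaded_bytes\"",
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "perSeriesAligner": "ALIGN_SUM"
          }
        ],
        "comparison": "COMPARISON_GT",
        "thresholdValue": 1073741824,
        "duration": "300s"
      }
    }
  ]
}
```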
Last updated: 2025-09-03 (UTC)