The latency metrics for Spanner measure how long it takes for the Spanner service to process a request. The metric captures the actual amount of time that elapses, not the amount of CPU time that Spanner uses.

These latency metrics do not include latency that occurs outside of Spanner, such as network latency or latency within your application layer. To measure other types of latency, you can use Cloud Monitoring to instrument your application with custom metrics.
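For example, a simple way to capture the application-side portion is to time each request yourself and feed the samples into a custom metric. The sketch below is a minimal, stdlib-only illustration of the timing part; the `query_spanner` stub is a hypothetical stand-in for a real Spanner call, and in a real application you would export the samples to Cloud Monitoring instead of keeping them in a list:

```python
import time
from contextlib import contextmanager

# Collected samples; in a real app you would export these to
# Cloud Monitoring as a custom metric instead of keeping a list.
latency_samples = []

@contextmanager
def record_latency(samples):
    """Measure wall-clock time around a block and record it."""
    start = time.perf_counter()
    try:
        yield
    finally:
        samples.append(time.perf_counter() - start)

def query_spanner():
    # Hypothetical stand-in for a real Spanner call; simulate some work.
    time.sleep(0.01)

with record_latency(latency_samples):
    query_spanner()

print(f"end-to-end latency: {latency_samples[-1]:.4f}s")
```

Because this measures wall-clock time around the whole call, it includes network latency and client-library overhead, which is exactly the portion that Spanner's own metrics do not cover.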
If your application experiences latency that is higher than expected, but the latency metrics for Spanner are significantly lower than the total end-to-end latency, there might be an issue in your application code. If your application has a performance issue that causes some code paths to be slow, the total end-to-end latency for each request might increase.

To check for this issue, benchmark your application to identify code paths that are slower than expected.

You can also comment out the code that communicates with Spanner, then measure the total latency again. If the total latency doesn't change very much, then Spanner is unlikely to be the cause of the high latency.
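The comparison above can be sketched as a small harness. Everything here is simulated: `app_logic` and `spanner_call` are hypothetical stand-ins for your real code paths, and the nearest-rank percentile helper mirrors the p50 view that latency metrics typically provide:

```python
import time

def spanner_call():
    time.sleep(0.002)  # stand-in for the real Spanner request

def app_logic():
    time.sleep(0.001)  # stand-in for the rest of the request path

def measure(n, include_spanner):
    """Collect n end-to-end latency samples, optionally skipping Spanner."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        app_logic()
        if include_spanner:
            spanner_call()
        samples.append(time.perf_counter() - start)
    return samples

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    index = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[index]

with_spanner = measure(50, include_spanner=True)
without_spanner = measure(50, include_spanner=False)
print(f"p50 with Spanner:    {percentile(with_spanner, 50):.4f}s")
print(f"p50 without Spanner: {percentile(without_spanner, 50):.4f}s")
```

If the two p50 values are close, the Spanner call contributes little to the total, which points back at the application code paths.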
High total latency, high Spanner latency

If your application experiences latency that is higher than expected, and the Spanner latency metrics are also high, there are a few likely causes:
- Your instance needs more compute capacity. If your instance does not have enough CPU resources, and its CPU utilization exceeds the recommended maximum, then Spanner might not be able to process your requests quickly and efficiently.
- Some of your queries cause high CPU utilization. If your queries do not take advantage of Spanner features that improve efficiency, such as query parameters and secondary indexes, or if they include a large number of joins or other CPU-intensive operations, the queries can use a large portion of the CPU resources for your instance.
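To illustrate why query parameters matter, the sketch below uses the stdlib `sqlite3` module so it stays self-contained and runnable; in Spanner the same idea is expressed with `@`-named parameters passed through the client library rather than values spliced into the SQL string, which lets the service cache and reuse query plans:

```python
import sqlite3

# In-memory database standing in for a real table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO singers VALUES (1, 'Alice'), (2, 'Bob')")

# Parameterized query: the engine can reuse the statement across
# different values, and the value is never spliced into the SQL text.
rows = conn.execute(
    "SELECT id FROM singers WHERE name = ?", ("Alice",)
).fetchall()
print(rows)  # [(1,)]
```

The unparameterized alternative (building the SQL string with the literal value embedded) forces the engine to treat every value as a distinct statement, in addition to being an injection risk.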
To check for these issues, use the Cloud Monitoring console to look for a correlation between high CPU utilization and high latency. Also, check the query statistics for your instance to identify any CPU-intensive queries during the same time period.
If you find that CPU utilization and latency are both high at the same time, take action to address the issue:
- If you did not find many CPU-intensive queries, add compute capacity to the instance. Adding compute capacity provides more CPU resources and enables Spanner to handle a larger workload.
- If you found CPU-intensive queries, review the query execution plans to learn why the queries are slow, then update your queries to follow the SQL best practices for Spanner. You might also need to review the schema design for the database and update the schema to allow for more efficient queries.

Last updated: 2025-09-03 (UTC)