# Optimize high memory usage in instances

[MySQL](/sql/docs/mysql/optimize-high-memory-usage "View this page for the MySQL database engine") | PostgreSQL | [SQL Server](/sql/docs/sqlserver/optimize-high-memory-usage "View this page for the SQL Server database engine")

This document covers how to identify high memory usage for Cloud SQL instances and provides recommendations on how to solve memory-related issues.

To learn how to configure memory usage for a Cloud SQL instance, see [Best practices for managing memory usage](/sql/docs/postgres/manage-memory-usage-best-practices).

Identify high memory usage
--------------------------

The following sections discuss high memory usage scenarios.

### Use Metrics Explorer to identify the memory usage

You can review the memory usage of the instance with the [`database/memory/components.usage`](/sql/docs/postgres/admin-api/metrics) metric in [Metrics Explorer](/monitoring/charts/metrics-explorer).

**Note:** If you have less than 10% memory in `database/memory/components.cache` and `database/memory/components.free` combined, the risk of an OOM event is high. To monitor memory usage and prevent OOM events, we recommend that you set up an [alerting policy](/monitoring/alerts) with a metric-threshold condition of 90% on `database/memory/components.usage`.

### Use Query insights to analyze the explain plan for queries that consume high resources

[Query insights](/sql/docs/postgres/using-query-insights) helps you detect, diagnose, and prevent query performance problems for Cloud SQL databases. Query insights gives you a list of long-running queries along with their [explain plan (PostgreSQL documentation)](https://www.postgresql.org/docs/current/sql-explain.html). Review the explain plan and identify the parts of the query that use a scan method with high memory usage. Regardless of the query run time, Query insights gives you the explain plan for all queries. Identify the complex queries that take the most time, so that you know which queries block memory for longer durations.

Common PostgreSQL scan methods that use high memory include the following:

- Bitmap heap scan
- Quick sort
- Hash join or Hash

High memory usage and relevant logs for Gemini-enabled instances
----------------------------------------------------------------

**Preview: [Gemini in Databases](/gemini/docs/overview).** This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1). You can process personal data for this feature as outlined in the [Cloud Data Processing Addendum](/terms/data-processing-addendum), subject to the obligations and restrictions described in the agreement under which you access Google Cloud. Pre-GA features are available "as is" and might have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

If you have [Gemini](/gemini/docs/overview) enabled, then instead of an out-of-memory (OOM) failure, which leads to database downtime, a connection that runs a query with high memory usage is terminated, preventing database downtime. To identify the offending query, check the database logs for entries such as the following:

    (...timestamp....) db=postgres, user=customer FATAL: terminating connection due to administrator command
The following Cloud SQL for PostgreSQL database log is displayed, capturing the high-memory query that was terminated to prevent an OOM event. The query is a normalized version of the original query:
    db=postgres,user=customer LOG: postgres process with PID 1734 for the query "SELECT COUNT(product) AS item_count FROM test_table WHERE product_type = $1 AND product LIKE $2 AND c6_2 IN ($3,$4,$5,$6,$7)" has been cancelled.
Notifications are also displayed on the **Cloud SQL Instances** page for the following events:

- Memory utilization of the instance over the last 24 hours.
- A list of normalized queries that were canceled within the past 24 hours.
- A link to Google documentation about optimizing memory usage.
High memory usage: recommendations
----------------------------------

The following recommendations address common memory-related problems. If the instance continues to use a large amount of memory, the chances are high that it eventually runs into an `out of memory` issue. If the memory demands of either PostgreSQL or another process cause the system to run out of memory, you see an `Out of Memory` kernel message in the PostgreSQL logs, and the PostgreSQL instance is eventually stopped. For example:
    Out of Memory: Killed process 12345 (postgres)
OOM issues most often occur on instances with a high `work_mem` value combined with a large number of active connections. Therefore, if your Cloud SQL for PostgreSQL instance gets frequent OOMs, or to avoid OOMs, consider the following recommendations:

- Set `work_mem`

  Queries that use a quick sort are faster than those that use an external merge sort. However, the former can lead to memory exhaustion. To resolve this issue, set the `work_mem` value so that it reasonably balances the sort operations that happen in memory and on disk. You can also consider setting `work_mem` at the session level rather than for the entire instance, as in the sketch that follows.
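  A minimal sketch of scoping `work_mem` to one session or one transaction; the `32MB` and `64MB` values are illustrative placeholders, not recommendations from this guide:

      -- Check the value that is in effect for this session.
      SHOW work_mem;

      -- Applies to this session only; other connections keep the instance default.
      SET work_mem = '32MB';

      -- Applies to the current transaction only.
      BEGIN;
      SET LOCAL work_mem = '64MB';
      -- ... run the memory-hungry sort or hash join here ...
      COMMIT;

      -- Return this session to the instance default.
      RESET work_mem;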
- Monitor the active sessions

  Each connection uses a certain amount of memory. Use the following query to check the active connections count:

      SELECT
        state,
        usename,
        count(1)
      FROM
        pg_stat_activity
      WHERE
        pid <> pg_backend_pid()
      GROUP BY
        state,
        usename
      ORDER BY
        1;

  If you have a large number of active sessions, analyze the root cause of the high number of active sessions, for example, transaction locks.
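  If the analysis shows sessions that are stuck or no longer needed, you can end them individually. A minimal sketch, assuming you have already confirmed from `pg_stat_activity` that backend PID 12345 is safe to interrupt (the PID is a placeholder):

      -- Cancel only the running query; the connection stays open.
      SELECT pg_cancel_backend(12345);

      -- Terminate the whole connection and free its memory.
      SELECT pg_terminate_backend(12345);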
- Set `shared_buffers`

  If `shared_buffers` is set to a high value, consider decreasing it so that the memory can be used for other operations, such as `work_mem`, or for establishing new connections.

  **Cache hit ratio**

  PostgreSQL generally tries to keep the data that you access most often in the cache. When a client requests data, if the data is already cached in shared buffers, it's given to the client directly. This is called a *cache hit*. If the data is not present in shared buffers, it's first fetched into shared buffers from disk and then given to the client. This is called a *cache miss*. The cache hit ratio measures how many content requests the cache has handled compared to the total requests received. Run the following query to check the cache hit ratio for the table requests in the PostgreSQL instance:

      SELECT
        sum(heap_blks_read) as heap_read,
        sum(heap_blks_hit) as heap_hit,
        sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
      FROM
        pg_statio_user_tables;

  Run the following query to check the cache hit ratio for the index requests in the PostgreSQL instance:

      SELECT
        sum(idx_blks_read) as idx_read,
        sum(idx_blks_hit) as idx_hit,
        sum(idx_blks_hit) / (sum(idx_blks_hit) + sum(idx_blks_read)) as ratio
      FROM
        pg_statio_user_indexes;
  Generally, a cache hit ratio of 95 to 99% is considered a good value.
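  The instance-wide ratio can hide a few poorly cached tables. A minimal sketch that lists the ten tables with the lowest hit ratios; the `NULLIF` guard avoids division by zero for tables that have not been read yet:

      SELECT
        relname,
        heap_blks_read,
        heap_blks_hit,
        round(heap_blks_hit::numeric
              / NULLIF(heap_blks_hit + heap_blks_read, 0), 4) AS ratio
      FROM
        pg_statio_user_tables
      ORDER BY
        ratio ASC NULLS LAST
      LIMIT 10;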
- In Cloud SQL for PostgreSQL, the `huge_pages` flag is enabled by default for better memory management. To learn more about `huge_pages`, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-HUGE-PAGES).
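  To confirm how the flag is set on a given instance, you can read it back from `pg_settings`; this query only inspects the setting and is safe to run anywhere:

      SELECT name, setting, boot_val
      FROM pg_settings
      WHERE name = 'huge_pages';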
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-08-08 UTC。"],[],[],null,["# Optimize high memory usage in instances\n\n\u003cbr /\u003e\n\n[MySQL](/sql/docs/mysql/optimize-high-memory-usage \"View this page for the MySQL database engine\") \\| PostgreSQL \\| [SQL Server](/sql/docs/sqlserver/optimize-high-memory-usage \"View this page for the SQL Server database engine\")\n\n\u003cbr /\u003e\n\nThis document covers how to identify high memory usage for Cloud SQL instances and provides recommendations on how to solve memory-related issues.\n\nTo learn how to configure memory usage for a Cloud SQL instance, see [Best practices for managing memory usage](/sql/docs/postgres/manage-memory-usage-best-practices).\n\nIdentify high memory usage\n--------------------------\n\nThe following sections discuss high memory usage scenarios.\n\n### Use Metrics Explorer to identify the memory usage\n\nYou can review memory usage of the instance with the\n[`database/memory/components.usage`](/sql/docs/postgres/admin-api/metrics) metric in\n[Metrics Explorer](/monitoring/charts/metrics-explorer).\n| **Note:** If you have less than 10% memory in `database/memory/components.cache` and `database/memory/components.free` combined, the risk of an OOM event is high. To monitor the memory usage and to prevent OOM events, we recommend that you set up an [alerting policy](/monitoring/alerts) with a metric threshold condition of 90% in `database/memory/components.usage`.\n\n### Use Query insights to analyse explain plan for queries that are consuming high resources\n\n[Query insights](/sql/docs/postgres/using-query-insights) helps you detect, diagnose, and prevent query performance problems for Cloud SQL databases.\nQuery insights gives you a list of long running queries along with their [explain plan (PostgreSQL documentation)](https://www.postgresql.org/docs/current/sql-explain.html).\nReview the explain plan and identify the part of the query that has a high memory usage scan method.\nRegardless of the query run time, query insights gives you the explain plan for all queries. 
Identify the complex queries that are taking more time so that you know which queries are blocking the memory for longer durations.\n\nCommon PostgreSQL scan methods that use high memory include the following:\n\n- Bitmap heap scan\n- Quick sort\n- Hash join or Hash\n\nHigh memory usage and relevant logs for Gemini-enabled instances\n----------------------------------------------------------------\n\n|\n| **Preview\n| --- [Gemini in Databases](/gemini/docs/overview)**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| You can process personal data for this feature as outlined in the\n| [Cloud Data Processing\n| Addendum](/terms/data-processing-addendum), subject to the obligations and restrictions described in the agreement under\n| which you access Google Cloud.\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nIf you have [Gemini](/gemini/docs/overview) enabled, instead of an out-of-memory (OOM) failure which leads to database downtime, a connection executing a query with high memory usage is terminated thereby preventing database downtime. To identify the defaulting query, you can check the database logs for the following entries: \n\n (...timestamp....) db=postgres, user=customer FATAL: terminating connection due to administrator command\n\nThe following Cloud SQL for PostgreSQL database log is displayed which captures the high memory usage query that was terminated to prevent OOM. The query is a normalized version of the original query: \n\n db=postgres,user=customer LOG: postgres process with PID 1734 for the query \"SELECT COUNT(product) AS item_count FROM test_table WHERE product_type = $1 AND product LIKE $2 AND c6_2 IN ($3,$4,$5,$6,$7)\" has been cancelled.\n\nNotifications are also displayed on the **Cloud SQL Instances** page for the following events:\n\n- Memory utilization of the instance over the last 24 hours.\n- List of normalized queries that have been canceled within the past 24 hours.\n- A link to Google documentation about optimizing memory usage.\n\nHigh Memory Usage - Recommendations\n-----------------------------------\n\nThe following recommendations address the common memory-related problems.\nIf the instance continues to use a high amount of memory, chances are high that it eventually gets an `out of memory` issue.\nIf the memory demands of either PostgreSQL or another process cause the system to run out of memory, you see an `Out of Memory` kernel message in PostgreSQL logs and the PostgreSQL instance is eventually stopped.\nFor example: \n\n Out of Memory: Killed process 12345 (postgres)\n\nThe most common instance where you see an OOM issue is with a higher value of `work_mem` with a high number of active connections.\nTherefore, if you are getting frequent OOMs or to avoid OOMs in your Cloud SQL for PostgreSQL instance, you should consider following these recommendations:\n\n- Set `work_mem`\n\n Queries that use quick sort are faster than the ones using external merge sort. 
However, the former may lead to memory exhaustion.\n To resolve this issue, set the `work_mem` value reasonable enough such that it balances both, the sort operations happening in the memory and disk.\n You can also consider setting `work_mem` on a session level rather than setting it for an entire instance.\n- Monitor the active sessions\n\n Each connection uses a certain amount of memory. Use the following query to check the active connections count: \n\n SELECT\n state,\n usename,\n count(1)\n FROM\n pg_stat_activity\n WHERE\n pid \u003c\u003e pg_backend_pid()\n GROUP BY\n state,\n usename\n ORDER BY\n 1;\n\n If you have a large number of active sessions, analyze the root cause for a high number of active sessions; for example, transaction locks.\n- Set `shared_buffers`\n\n If `shared_buffers` is set to a higher value, consider decreasing the `shared_buffers` value so that the memory can be used for other operations, such as `work_mem`, or for establishing new connections.\n\n **Cache hit ratio**\n\n PostgreSQL generally tries to keep the data you access most often in the cache. When the data is requested by a client,\n if it's already cached in shared buffers, it's directly given to the client. This is called a *cache hit* .\n If the data is not present in shared buffers, the data is first fetched to shared buffers from a disk and then given to the client.\n This is called a *cache miss*.\n Cache hit ratio measures how many content requests the cache has handled compared to the requests received.\n Run the following query to check the cache hit ratio for the table requests in the PostgreSQL instance: \n\n SELECT\n sum(heap_blks_read) as heap_read,\n sum(heap_blks_hit) as heap_hit,\n sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio\n FROM\n pg_statio_user_tables;\n\n Run the following query to check the cache hit ratio for the index requests in the PostgreSQL instance: \n\n SELECT\n sum(idx_blks_read) as idx_read,\n sum(idx_blks_hit) as idx_hit,\n (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) as ratio\n FROM\n pg_statio_user_indexes;\n\n Generally, 95 to 99% of cache hit ratio is considered to be a good value.\n- In Cloud SQL for PostgreSQL, the `huge_pages` flag is enabled by default for better\n memory management. To learn more about `huge_pages`, see\n [PostreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-HUGE-PAGES).\n\n- Set `max_locks_per_transaction`\n\n The `max_locks_per_transaction` value indicates the number of database objects that can be locked simultaneously.\n In most cases, the default value of 64 is sufficient. However, if you are dealing with a large dataset, you may end up with OOMs. 
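  Before you raise the value, you can compare the number of locks currently held with the capacity that the formula above implies. A minimal sketch:

      -- Number of entries currently held in the shared lock table.
      SELECT count(*) AS locks_held FROM pg_locks;

      -- Approximate lock-table capacity:
      -- max_locks_per_transaction * (max_connections + max_prepared_transactions).
      SELECT current_setting('max_locks_per_transaction')::int
             * (current_setting('max_connections')::int
                + current_setting('max_prepared_transactions')::int)
             AS lock_table_capacity;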
- Set `max_pred_locks_per_transaction`

  A transaction might fail if you have clients that touch many different tables in a single serializable transaction. In that scenario, consider increasing `max_pred_locks_per_transaction` to a reasonably high value. Like `max_locks_per_transaction`, `max_pred_locks_per_transaction` also uses shared memory, so don't set an unreasonably high value.

- If the memory usage is still high and you feel that those queries are legitimate traffic, then consider increasing the memory resources in your instance to avoid a database crash or downtime.

What's next
-----------

- [Google Cloud recommenders](/recommender/docs/recommenders)