[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-05 (世界標準時間)。"],[],[],null,["# Time to live (TTL) overview\n\nTime to live (TTL) lets you set policies to periodically delete data from\nSpanner tables. Removing unneeded data:\n\n- Decreases storage and backup costs.\n- Reduces the number of rows the database has to scan for some queries, potentially increasing query performance.\n- Helps to adhere to regulations or industry guidelines that limit the retention time on certain types of data.\n\nTTL is ideal for regular clean-up activities. It runs continuously in the\nbackground, periodically deleting eligible data in batches. Data is typically\ndeleted within 72 hours after its expiration date. Each delete requires a\nprimary key replication across the database's replicas that leads to replication\ncosts.\n\nFor more information, see\n[Data replication pricing](/spanner/pricing#data-replication).\n\nTTL doesn't immediately invalidate data or hide it from queries when it becomes\neligible for deletion. TTL also doesn't check data while it is inserted, thus it\ndoesn't block you from inserting a row with an expired timestamp.\n\nTTL is designed to minimize the impact on other database workloads.\nThe TTL sweeper process works in the background at\n[system low priority](/spanner/docs/cpu-utilization#task-priority). It\nspreads work over time and available instance resources more efficiently than\nend-user queries and includes retry logic to ensure end-to-end cleanup with\nminimal processing overhead.\n\nAnother background compaction process reclaims storage from deleted rows,\ntypically within seven days.\n\nHow does TTL work?\n------------------\n\nYou can set TTL on Spanner tables by defining a row deletion\npolicy in the database schema. This policy allows Spanner to\nperiodically delete unneeded data. TTL policies have the following\ncharacteristics:\n\n- Each table can have its own policy.\n- Only one TTL policy can be specified per table.\n- You set up TTL differently for GoogleSQL-dialect databases and PostgreSQL-dialect databases.\n- The TTL policy doesn't delete rows that have the timestamp set to `NULL`.\n- Data inserted with expired timestamps is cleaned up when detected in the next TTL deletion cycle.\n\n### TTL with GoogleSQL\n\nUsing GoogleSQL, you define a row deletion policy by specifying\na *timestamp* and an *interval* to determine when a row is eligible for\ndeletion; for example, last update date plus 30 days.\n\nA background system process checks for eligible rows daily. It parallelizes the\nactual deletes in batches that are executed close to where the data is stored\ninternally. Each batch runs in its own transaction at a consistent timestamp.\nThus the rows in a given batch, along with any indexes and interleaved children,\nare deleted atomically. However, deletes across batches\nhappens in different transactions.\n\nSince this is an asynchronous background process, there is a delay between\neligibility and deletion. Typically, the delay is less than 72 hours. 
For step-by-step instructions on how to create a GoogleSQL row deletion policy, see [Create a TTL policy](/spanner/docs/ttl/working-with-ttl#create).

### TTL with PostgreSQL

Using PostgreSQL, a database owner can use a `TTL INTERVAL` clause in the `CREATE TABLE` or `ALTER TABLE` statement to define a row deletion policy.

To set a row deletion policy on a PostgreSQL table, the table must have a column with the data type `TIMESTAMPTZ`. The `TTL INTERVAL` clause uses this column to set an interval specification for when a row becomes eligible for deletion.

The clause must evaluate to a whole number of days. For example, `'3 DAYS'` is allowed, and so is `'4 DAYS 2 MINUTES - 2 MINUTES'`, but `'4 DAYS 3 MINUTES'` is not allowed and returns an error. You cannot use negative numbers.

TTL garbage collection deletes eligible rows continuously and in the background. Because this is an asynchronous background process, there is a delay between eligibility and deletion. The table might contain rows that are eligible for TTL deletion but for which TTL has not yet completed. Typically, the delay is less than 72 hours.

For instructions on how to create a PostgreSQL row deletion policy, see [Create a TTL policy](/spanner/docs/ttl/working-with-ttl#create). An illustrative sketch of the `TTL INTERVAL` clause also appears at the end of this page.

Backups and TTL
---------------

### Restore a backup

When you restore a database from a backup, any row deletion policies that were configured on the source database are automatically dropped. This prevents Spanner from potentially deleting expired data as soon as the backup has been restored. You therefore need to reconfigure TTL manually on the restored database.

### Data consistency

A [backup](/spanner/docs/backup) is a consistent snapshot of your data at a particular point in time (`version_time`). The backup can contain rows that are eligible for TTL deletion but for which TTL has not yet completed. Similarly, Dataflow export jobs read the entire table at a fixed timestamp.

### Auditing

TTL supports auditing its deletions through [change streams](/spanner/docs/change-streams/details). Change stream data records that track TTL deletions have the `transaction_tag` field set to `RowDeletionPolicy` and the `is_system_transaction` field set to `true`. Change stream readers can then filter out all TTL records, or keep only the TTL records, depending on their use case. For an example, see [using Beam to filter by transaction tags](/spanner/docs/change-streams/use-dataflow#filter-by-tx-tag).

What's next
-----------

- Learn [how to manage data retention with TTL](/spanner/docs/ttl/working-with-ttl).
- Learn about [TTL metrics and monitoring](/spanner/docs/ttl/monitoring-and-metrics).
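As noted in the "TTL with PostgreSQL" section, the following is a minimal sketch of what a `TTL INTERVAL` clause can look like. The `events` table and `created_at` column are hypothetical names used only for this example; see [Create a TTL policy](/spanner/docs/ttl/working-with-ttl#create) for the authoritative syntax.

```sql
-- Hypothetical PostgreSQL-dialect table: rows become eligible for TTL deletion
-- 30 days after created_at. Rows where created_at is NULL are never deleted by TTL.
CREATE TABLE events (
  event_id   bigint NOT NULL PRIMARY KEY,
  created_at timestamptz
) TTL INTERVAL '30 DAYS' ON created_at;

-- Adding the same policy to an existing table:
ALTER TABLE events ADD TTL INTERVAL '30 DAYS' ON created_at;
```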