Python 2.7 has reached end of support and will be deprecated on January 31, 2026. After the deprecation date, you won't be able to deploy Python 2.7 applications, even if your organization previously used an organization policy to re-enable deployments of legacy runtimes. Existing Python 2.7 applications will continue to run and receive traffic after the deprecation date. We recommend that you migrate to the latest supported version of Python.
Structuring Data for Strong Consistency
Note: Developers building new applications are strongly encouraged to use the NDB Client Library, which has several advantages over this client library, such as automatic entity caching via the Memcache API. If you are currently using the older DB Client Library, read the DB to NDB Migration Guide.
Datastore provides high availability, scalability, and durability by distributing data across many machines and using synchronous replication over a wide geographic area. However, this design involves a tradeoff: the write throughput for any single entity group is limited to about one commit per second, and there are limitations on queries or transactions that span multiple entity groups. This page describes these limitations in more detail and discusses best practices for structuring your data to support strong consistency while still meeting your application's write-throughput requirements.
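To make the tradeoff concrete, here is a back-of-the-envelope sketch in plain Python (independent of the Datastore APIs; the ~1 commit/second figure is the entity-group limit described above): spreading writes across more entity groups raises the aggregate write ceiling roughly linearly, because commits to different entity groups can proceed in parallel.

```python
# Rough capacity model: each entity group sustains ~1 commit/second,
# and commits to *different* entity groups can run in parallel.
ENTITY_GROUP_COMMITS_PER_SEC = 1.0

def max_write_throughput(num_entity_groups):
    """Approximate aggregate commits/second across independent entity groups."""
    return num_entity_groups * ENTITY_GROUP_COMMITS_PER_SEC

# One guestbook == one entity group: ~1 write/second total.
print(max_write_throughput(1))    # 1.0
# Sharding posts across 10 entity groups raises the ceiling to ~10
# writes/second, at the cost of needing eventually-consistent
# (non-ancestor) queries to read across all of them.
print(max_write_throughput(10))   # 10.0
```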
Strongly-consistent reads always return current data and, if performed within a transaction, appear to come from a single, consistent snapshot. However, queries must specify an ancestor filter in order to be strongly consistent or to participate in a transaction, and transactions can involve at most 25 entity groups. Eventually-consistent reads do not have these limitations and are adequate in many cases. Using eventually-consistent reads lets you distribute your data among a larger number of entity groups, enabling you to obtain greater write throughput by executing commits in parallel on the different entity groups. But to determine whether eventually-consistent reads are suitable for your application, you need to understand their characteristics:
- The results of these reads might not reflect the latest transactions. This can happen because these reads do not ensure that the replica they run on is up to date; instead, they use whatever data is available on that replica when the query executes. Replication latency is almost always less than a few seconds.
- A committed transaction that spanned multiple entities might appear to have been applied to some of the entities and not others. Note, however, that a transaction will never appear to have been partially applied within a single entity.
- Query results can include entities that should not have been included according to the filter criteria, and can exclude entities that should have been included. This can happen because indexes might be read at a different version than the entity itself.
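The first two bullet points above can be illustrated with a toy model in plain Python (this is not the Datastore API; `Replica` and its methods are invented names for this sketch): a replica that applies commits with a lag can miss an entity that was just written, yet a single-entity commit is never half-visible.

```python
class Replica:
    """Toy replica that applies committed writes with a fixed lag."""
    def __init__(self, lag):
        self.lag = lag          # number of commits the replica runs behind
        self.log = []           # committed (key, value) pairs, in commit order

    def commit(self, key, value):
        self.log.append((key, value))

    def eventually_consistent_read(self):
        # The replica has applied everything except the last `lag` commits,
        # so a read can be stale, but each applied commit is whole.
        applied = self.log[:len(self.log) - self.lag]
        return dict(applied)

store = Replica(lag=1)
store.commit('greeting1', 'hello')
store.commit('greeting2', 'world')   # most recent commit not yet replicated

snapshot = store.eventually_consistent_read()
print('greeting1' in snapshot)   # True  -- an older write is visible
print('greeting2' in snapshot)   # False -- the newest write may be missing
```

In the real service the lag is measured in seconds, not commits, but the observable effect on queries is the same.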
To understand how to structure your data for strong consistency, compare two different approaches for a simple guestbook application. The first approach creates a new root entity for each entity that is created:
import webapp2
from google.appengine.ext import db

class Guestbook(webapp2.RequestHandler):
    def post(self):
        greeting = Greeting()
        ...
It then queries on the entity kind Greeting for the ten most recent greetings:
import webapp2
from google.appengine.ext import db

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.out.write('<html><body>')
        greetings = db.GqlQuery("SELECT * "
                                "FROM Greeting "
                                "ORDER BY date DESC LIMIT 10")
However, because this scheme uses a non-ancestor query, the replica used to perform the query might not yet have seen the new greeting by the time the query executes. Nonetheless, nearly all writes are available to non-ancestor queries within a few seconds of commit. For many applications, a solution that provides the results of a non-ancestor query in the context of the current user's own changes is usually sufficient to make such replication latencies acceptable.
If strong consistency is important to your application, an alternative approach is to write entities with an ancestor path that identifies the same root entity across all entities that must be read in a single, strongly-consistent ancestor query:
import webapp2
from google.appengine.ext import db

class Guestbook(webapp2.RequestHandler):
    def post(self):
        guestbook_name = self.request.get('guestbook_name')
        greeting = Greeting(parent=guestbook_key(guestbook_name))
        ...
You can then perform a strongly-consistent ancestor query within the entity group identified by the common root entity:
import webapp2
from google.appengine.ext import db

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.out.write('<html><body>')
        guestbook_name = self.request.get('guestbook_name')

        greetings = db.GqlQuery("SELECT * "
                                "FROM Greeting "
                                "WHERE ANCESTOR IS :1 "
                                "ORDER BY date DESC LIMIT 10",
                                guestbook_key(guestbook_name))
This approach achieves strong consistency by writing to a single entity group per guestbook, but it also limits changes to the guestbook to no more than 1 write per second (the supported limit for entity groups). If your application is likely to encounter heavier write usage, you might need to consider other means: for example, you might put recent posts in a memcache with an expiration and display a mix of recent posts from memcache and Datastore, or you might cache them in a cookie, put some state in the URL, or something else entirely. The goal is to find a caching solution that provides the data for the current user for the period in which the user is posting to your application. Remember: if you do a get, an ancestor query, or any operation within a transaction, you always see the most recently written data.
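The "mix memcache and Datastore results" idea can be sketched in plain Python (the real Memcache and Datastore APIs are stood in for by a list and a plain argument here; `recent_cache`, the 60-second TTL, and the post dictionaries are illustrative assumptions): posts by the current user go into a short-lived cache, and the page merges them with the possibly stale query results.

```python
import time

CACHE_TTL_SECONDS = 60.0

recent_cache = []   # (timestamp, post) pairs for the current user's own posts

def cache_post(post, now=None):
    """Remember a just-written post so its author sees it immediately."""
    recent_cache.append((now if now is not None else time.time(), post))

def merged_recent_posts(datastore_results, now=None, limit=10):
    """Combine cached posts with (possibly stale) query results."""
    now = now if now is not None else time.time()
    fresh = [p for ts, p in recent_cache if now - ts < CACHE_TTL_SECONDS]
    # Cached copies win over query results that carry the same id.
    seen, merged = set(), []
    for post in fresh + list(datastore_results):
        if post['id'] not in seen:
            seen.add(post['id'])
            merged.append(post)
    merged.sort(key=lambda p: p['date'], reverse=True)
    return merged[:limit]

# The user just posted id=3; the eventually-consistent query hasn't seen it.
cache_post({'id': 3, 'date': 3, 'text': 'newest'}, now=100.0)
stale = [{'id': 2, 'date': 2, 'text': 'older'},
         {'id': 1, 'date': 1, 'text': 'oldest'}]
print([p['id'] for p in merged_recent_posts(stale, now=101.0)])   # [3, 2, 1]
```

Once replication catches up, the query results include post 3 as well; the dedup step keeps the page stable across that transition.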
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-09-04 UTC.