Google was the first in the industry to publish an [AI/ML Privacy Commitment](https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-unveils-ai-and-ml-privacy-commitment), which outlines our belief that customers should have the highest level of security and control over their data stored in the cloud. That commitment extends to Google Cloud's generative AI products. Google ensures that its teams follow these commitments through robust data governance practices, which include reviews of the data that Google Cloud uses in the development of its products. For more details about how Google processes data, see Google's [Cloud Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).
## Training restriction
As outlined in Section 17, "Training Restriction," in the Service Terms section of the [Service Specific Terms](https://cloud.google.com/terms/service-terms), Google won't use your data to train or fine-tune any AI/ML models without your prior permission or instruction. This applies to all managed models on Vertex AI, including GA and pre-GA models.
## Customer data retention and achieving zero data retention
Customer data is retained in Vertex AI for Google models for limited periods of time in the following scenarios and conditions. To achieve zero data retention, customers must take specific actions in each of these areas:
- **Data caching for Google models**: By default, Google foundation models cache inputs for Gemini models. Caching reduces latency and speeds up responses to subsequent prompts from the customer. Cached content is stored for up to 24 hours in the data center where the request was served. Data caching is enabled or disabled at the Google Cloud project level, and project-level privacy is enforced for cached data. A project's cache settings apply to all regions. To achieve zero data retention, you must disable data caching. See [Enabling and disabling data caching](#enabling-disabling-caching).
- **Prompt logging for abuse monitoring for Google models**: As outlined in Section 4.3, "Generative AI Safety and Abuse," of the [Google Cloud Platform Terms of Service](https://cloud.google.com/terms), Google may log prompts to detect potential abuse and violations of its [Acceptable Use Policy](https://cloud.google.com/terms/aup) and [Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy) as part of providing generative AI services to customers. Only customers whose use of Google Cloud is governed by the Google Cloud Platform Terms of Service and who don't have an [Invoiced Cloud Billing account](/billing/docs/concepts#billing_account_types) are subject to prompt logging for abuse monitoring. If you are in scope for prompt logging for abuse monitoring and want zero data retention, you can request an exception. See [Abuse monitoring](/vertex-ai/generative-ai/docs/learn/abuse-monitoring).
- **Grounding with Google Search**: As outlined in Section 19, "Generative AI Services: Grounding with Google Search," of the [Service Specific Terms](https://cloud.google.com/terms/service-terms), Google stores the prompts and contextual information that customers provide, along with the generated output, for 30 days to create grounded results and search suggestions. The stored information may also be used for debugging and testing of the systems that support Grounding with Google Search. This storage can't be disabled when you use Grounding with Google Search (a sketch of a grounded request appears after this list).
- **Session resumption for the Gemini Live API**: This feature is disabled by default. To enable it, the user must specify the corresponding field in the API request on every call, and project-level privacy is enforced for cached data. With session resumption enabled, cached data, including text, video, and audio prompt data and model outputs, is stored for up to 24 hours so that the user can reconnect to a previous session within that window. To achieve zero data retention, don't enable this feature (a sketch of the opt-in appears after this list). For more information about this feature, including how to enable it, see [Live API](/vertex-ai/generative-ai/docs/live-api#session-resumption).
This applies to all managed models on Vertex AI, including GA and pre-GA models.
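To make concrete which requests fall under the 30-day storage for Grounding with Google Search, the following is a minimal sketch of a grounded Gemini request on Vertex AI. It uses the google-genai SDK, which this page doesn't cover; the project ID, region, and model name are placeholder assumptions, and the exact tool field can differ between Gemini model versions.

```python
# Minimal sketch (assumptions noted above): a generateContent call with the
# Google Search tool enabled. Prompts and output for such requests are stored
# for 30 days, as described under "Grounding with Google Search" above.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="PROJECT_ID", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.0-flash",  # hypothetical model choice
    contents="Who won the most recent FIFA World Cup?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],  # turns on grounding
    ),
)
print(response.text)
```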
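Because Live API session resumption is an explicit, per-connection opt-in, the following sketch shows roughly how that opt-in might look with the google-genai SDK. The SDK, model name, and field names are assumptions drawn from the Live API documentation rather than from this page; if you never set the session-resumption field, no resumption cache is created, which is the zero-data-retention posture described above.

```python
# Minimal sketch (assumptions noted above) of opting in to Live API session
# resumption. Omit session_resumption entirely to keep zero data retention.
import asyncio
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="PROJECT_ID", location="us-central1")
MODEL = "gemini-2.0-flash-live-preview-04-09"  # hypothetical Live API model

async def main(previous_handle=None):
    config = types.LiveConnectConfig(
        response_modalities=["TEXT"],
        # Opt in for this connection; pass a handle from an earlier session to resume it.
        session_resumption=types.SessionResumptionConfig(handle=previous_handle),
    )
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        await session.send_client_content(
            turns=types.Content(role="user", parts=[types.Part(text="Hello")])
        )
        async for message in session.receive():
            update = message.session_resumption_update
            if update and update.resumable and update.new_handle:
                # Store this handle to reconnect to the same session within 24 hours.
                print("Resumption handle:", update.new_handle)

asyncio.run(main())
```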
### Enabling and disabling data caching
You can use the following curl commands to get the caching status, disable caching, or re-enable caching. When you disable or re-enable caching, the change applies to all Google Cloud regions. For more information about using Identity and Access Management to grant the permissions required to enable or disable caching, see [Vertex AI access control with IAM](/vertex-ai/docs/general/access-control). The following sections show how to get the current cache setting, disable caching, and enable caching.
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Generative AI and zero data retention\n\nGoogle was the first in the industry to publish an\n[AI/ML Privacy Commitment](https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-unveils-ai-and-ml-privacy-commitment),\nwhich outlines our belief that customers should have the highest level of\nsecurity and control over their data that is stored in the cloud. That commitment\nextends to Google Cloud's generative AI products. Google ensures that its\nteams are following these commitments through robust data governance practices,\nwhich include reviews of the data that Google Cloud uses in the development of\nits products. More details about how Google processes data can also be found in\nGoogle's [Cloud Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n\nTraining restriction\n--------------------\n\nAs outlined in Section 17 \"Training Restriction\" in the Service Terms section of\n[Service Specific Terms](https://cloud.google.com/terms/service-terms),\nGoogle won't use your data to train or fine-tune any AI/ML models without your\nprior permission or instruction. This applies to all managed models on\nVertex AI, including GA and pre-GA models.\n\nCustomer data retention and achieving zero data retention\n---------------------------------------------------------\n\nCustomer data is retained in Vertex AI for Google models for limited\nperiods of time in the following scenarios and conditions. To achieve zero data retention, customers must take specific actions within each of these areas:\n\n- **Data caching for Google models** : By default, Google foundation models cache inputs for Gemini models. This is done to reduce latency and accelerate responses to subsequent prompts from the customer. Cached contents are stored for up to 24 hours in the data center where the request was served. Data caching is enabled or disabled at the Google Cloud project level, and project-level privacy is enforced for cached data. The same cache settings for a Google Cloud project apply to all regions. To achieve zero data retention, you must disable data caching. See [Enabling and disabling data caching](#enabling-disabling-caching).\n- **Prompt logging for abuse monitoring for Google models** : As outlined in Section 4.3 \"Generative AI Safety and Abuse\" of [Google Cloud Platform Terms of Service](https://cloud.google.com/terms), Google may log prompts to detect potential abuse and violations of its [Acceptable Use Policy](https://cloud.google.com/terms/aup) and [Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy) as part of providing generative AI services to customers. Only customers whose use of Google Cloud is governed by the [Google Cloud Platform Terms of Service](https://cloud.google.com/terms) and who don't have an [Invoiced Cloud Billing account](/billing/docs/concepts#billing_account_types) are subject to prompt logging for abuse monitoring. If you are in scope for prompt logging for abuse monitoring and want zero data retention, you can request an exception for abuse monitoring. 
See [Abuse monitoring](/vertex-ai/generative-ai/docs/learn/abuse-monitoring).\n- **Grounding with Google Search** : As outlined in Section 19 \"Generative AI Services: Grounding with Google Search\" of the [Service Specific Terms](https://cloud.google.com/terms/service-terms), Google stores prompts and contextual information that customers may provide, and generated output for thirty (30) days for the purposes of creating grounded results and search suggestions, and this stored information may be used for debugging and testing of systems that support grounding with Google Search. There is no way to disable the storage of this information if you use Grounding with Google Search.\n- **Session resumption for Gemini Live API:** This feature is disabled by default. It must be enabled by the user every time they call the API by specifying the field in the API request, and project-level privacy is enforced for cached data. Enabling Session Resumption allows the user to reconnect to a previous session within 24 hours by storing cached data, including text, video, and audio prompt data and model outputs, for up to 24 hours. To achieve zero data retention, do not enable this feature. For more information about this feature, including how to enable it, see [Live API](/vertex-ai/generative-ai/docs/live-api#session-resumption).\n\nThis applies to all managed models on Vertex AI, including GA and\npre-GA models.\n\n### Enabling and disabling data caching\n\nYou can use the following curl commands to get\ncaching status, disable caching, or re-enable caching.\nWhen you disable or re-enable caching, the change\napplies to all Google Cloud regions. For more information about using\nIdentity and Access Management to grant permissions required to enable or disable caching, see\n[Vertex AI access control with IAM](/vertex-ai/docs/general/access-control).\nExpand the following sections to learn how to get the current cache setting, to\ndisable caching, and to enable caching. \n\n#### Get current caching setting\n\nRun the following command to determine if caching is enabled or disabled for a\nproject. To run this command, a user must be granted one of the following\nroles: `roles/aiplatform.viewer`, `roles/aiplatform.user`, or\n`roles/aiplatform.admin`. \n\n```\nPROJECT_ID=PROJECT_ID\n# Setup project_id\n$ gcloud config set project PROJECT_ID\n\n# GetCacheConfig\n$ curl -X GET -H \"Authorization: Bearer $(gcloud auth application-default print-access-token)\" -H \"Content-Type: application/json\" https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/cacheConfig\n\n# Response if caching is enabled (caching is enabled by default).\n{\n \"name\": \"projects/PROJECT_ID/cacheConfig\"\n}\n\n# Response if caching is disabled.\n{\n \"name\": \"projects/PROJECT_ID/cacheConfig\"\n \"disableCache\": true\n}\n \n``` \n\n#### Disable caching\n\nRun the following curl command to disable caching for a Google Cloud project. To run\nthis command, a user must be granted the Vertex AI administrator role,\n`roles/aiplatform.admin`. 
\n\n```\nPROJECT_ID=PROJECT_ID\n# Setup project_id\n$ gcloud config set project PROJECT_ID\n\n# Setup project_id.\n$ gcloud config set project ${PROJECT_ID}\n\n# Opt-out of caching.\n$ curl -X PATCH -H \"Authorization: Bearer $(gcloud auth application-default print-access-token)\" -H \"Content-Type: application/json\" https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/cacheConfig -d '{\n \"name\": \"projects/PROJECT_ID/cacheConfig\",\n \"disableCache\": true\n}'\n\n# Response.\n{\n \"name\": \"projects/PROJECT_ID/locations/us-central1/projects/PROJECT_ID/cacheConfig/operations/${OPERATION_ID}\",\n \"done\": true,\n \"response\": {\n \"@type\": \"type.googleapis.com/google.protobuf.Empty\"\n }\n}\n \n``` \n\n#### Enable caching\n\nIf you disabled caching for a Google Cloud project and want re-enable it, run the\nfollowing curl command. To run this command, a user must be granted the\nVertex AI administrator role, `roles/aiplatform.admin`. \n\n```\nPROJECT_ID=PROJECT_ID\nLOCATION_ID=\"us-central1\"\n# Setup project_id\n$ gcloud config set project PROJECT_ID\n\n# Setup project_id.\n$ gcloud config set project ${PROJECT_ID}\n\n# Opt in to caching.\n$ curl -X PATCH -H \"Authorization: Bearer $(gcloud auth application-default print-access-token)\" -H \"Content-Type: application/json\" https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/cacheConfig -d '{\n \"name\": \"projects/PROJECT_ID/cacheConfig\",\n \"disableCache\": false\n}'\n\n# Response.\n{\n \"name\": \"projects/PROJECT_ID/locations/us-central1/projects/PROJECT_ID/cacheConfig/operations/${OPERATION_NUMBER}\",\n \"done\": true,\n \"response\": {\n \"@type\": \"type.googleapis.com/google.protobuf.Empty\"\n }\n}\n \n```\n\nWhat's next\n-----------\n\n- Learn about [responsible AI best practices and Vertex AI's safety filters](/vertex-ai/generative-ai/docs/learn/responsible-ai).\n- Learn about [Gemini in Google Cloud data governance](/gemini/docs/discover/data-governance)."]]