Vertex AI Feature Store (Legacy) provides a centralized repository for organizing, storing, and serving ML features. Using a central featurestore lets an organization efficiently share, discover, and reuse ML features at scale, which can increase the velocity of developing and deploying new ML applications.
Vertex AI Feature Store (Legacy) is a fully managed solution that manages and scales the underlying infrastructure, such as storage and compute resources. This means data scientists can focus on feature computation logic instead of worrying about the challenges of deploying features into production.
Vertex AI Feature Store (Legacy) is an integrated part of Vertex AI. You can use Vertex AI Feature Store (Legacy) independently or as part of Vertex AI workflows. For example, you can fetch data from Vertex AI Feature Store (Legacy) to train custom or AutoML models in Vertex AI.
Vertex AI Feature Store (Legacy) is the predecessor of Vertex AI Feature Store. To learn more about Vertex AI Feature Store, see the Vertex AI Feature Store documentation.
Overview
Use Vertex AI Feature Store (Legacy) to create and manage featurestores, entity types, and features. A featurestore is the top-level container for your features and their values. After you set up a featurestore, permitted users can add and share their features without additional engineering support. Users can define features and then import (ingest) feature values from various data sources. Learn more about the Vertex AI Feature Store (Legacy) data model and resources.
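As an illustration, the following minimal sketch uses the Vertex AI SDK for Python to create a featurestore, define an entity type with two features, and ingest values from BigQuery. The project ID, region, resource IDs, and the BigQuery source table are placeholder assumptions; adapt them to your own environment.

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-project", location="us-central1")

# Create a featurestore with a single online serving node.
fs = aiplatform.Featurestore.create(
    featurestore_id="movie_predictions",
    online_store_fixed_node_count=1,
)

# An entity type groups related features, for example everything about a user.
users = fs.create_entity_type(
    entity_type_id="users",
    description="User attributes",
)

# Define the features and their value types.
users.create_feature(feature_id="age", value_type="INT64")
users.create_feature(feature_id="liked_genres", value_type="STRING_ARRAY")

# Import (ingest) feature values from a BigQuery table. The table is assumed
# to have columns user_id, age, liked_genres, and update_time.
users.ingest_from_bq(
    feature_ids=["age", "liked_genres"],
    feature_time="update_time",
    bq_source_uri="bq://my-project.my_dataset.users",
    entity_id_field="user_id",
)
```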
Vertex AI Feature Store (Legacy) provides search and filter capabilities so that others can discover and reuse existing features. For each feature, you can view relevant metadata to determine its quality and usage patterns. For example, you can view the fraction of entities that have a valid value for a feature (also known as feature coverage) and the statistical distribution of feature values.
Managed solution for online serving at scale
Vertex AI Feature Store (Legacy) provides a managed solution for online feature serving (low-latency serving), which is critical for making timely online predictions. You don't need to build and operate low-latency data serving infrastructure; Vertex AI Feature Store (Legacy) does this for you and scales as needed. You write the logic that generates features but offload the task of serving them. All of this included management reduces the friction of building new features, so data scientists can focus on their work without worrying about deployment.
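For example, a low-latency online read might look like the following sketch with the Vertex AI SDK for Python. The featurestore, entity type, entity IDs, and feature IDs are assumed to exist from the earlier setup sketch.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Look up the existing entity type in the featurestore created earlier.
entity_type = aiplatform.Featurestore(
    featurestore_name="movie_predictions"
).get_entity_type(entity_type_id="users")

# Read the latest feature values for one or more entities; the call returns
# a pandas DataFrame suitable for building an online prediction request.
df = entity_type.read(
    entity_ids=["user_123", "user_456"],
    feature_ids=["age", "liked_genres"],
)
print(df)
```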
Mitigate training-serving skew
Training-serving skew occurs when the feature data distribution that you use in production differs from the feature data distribution that was used to train your model. This skew often results in discrepancies between a model's performance during training and its performance in production. The following examples describe how Vertex AI Feature Store (Legacy) addresses potential sources of training-serving skew:
- Vertex AI Feature Store (Legacy) ensures that a feature value is imported into a featurestore only once and that the same value is reused for both training and serving. Without a featurestore, you might have different code paths for generating features during training and serving, so feature values could differ between the two.
- Vertex AI Feature Store (Legacy) provides point-in-time lookups to fetch historical data for training. With these lookups, you can mitigate data leakage by fetching only the feature values that were available before a prediction, not after. See the batch-export sketch after this list.
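The following sketch illustrates a point-in-time batch export to BigQuery with the Vertex AI SDK for Python. The read-instances table (with entity ID and timestamp columns) and the destination table are hypothetical names used only for illustration.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

featurestore = aiplatform.Featurestore(featurestore_name="movie_predictions")

# The read-instances table lists which entities to fetch and at which time.
# Each row is assumed to contain a "users" entity ID column and a "timestamp"
# column; feature values are returned as of each timestamp, which avoids
# leaking values that only became available after the prediction time.
featurestore.batch_serve_to_bq(
    bq_destination_output_uri="bq://my-project.my_dataset.training_data",
    serving_feature_ids={"users": ["age", "liked_genres"]},
    read_instances_uri="bq://my-project.my_dataset.read_instances",
)
```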
Detect drift
Vertex AI Feature Store (Legacy) helps you detect significant changes to your feature data distribution over time, also known as drift. Vertex AI Feature Store (Legacy) constantly tracks the distribution of feature values that are imported into the featurestore. As feature drift increases, you might need to retrain models that use the affected features. For more information about how to detect drift, see "View feature value anomalies".
Quotas and limits
Vertex AI Feature Store (Legacy) enforces quotas and limits to help you manage resources by setting your own usage limits, and to protect the community of Google Cloud users by preventing unforeseen spikes in usage. To avoid hitting unplanned constraints, review the Vertex AI Feature Store (Legacy) quotas on the "Quotas and limits" page. For example, Vertex AI Feature Store (Legacy) sets a quota on the number of online serving nodes and a quota on the number of online serving requests that you can make per minute.
Data retention
Vertex AI Feature Store (Legacy) keeps feature values up to the data retention limit. This limit is based on the timestamp associated with the feature values, not on when the values were imported. Vertex AI Feature Store (Legacy) schedules values with timestamps that exceed the limit for deletion.
Pricing
Vertex AI Feature Store (Legacy) pricing is based on several factors, such as how much data you store and the number of featurestore online nodes you use. Charges start right after you create a featurestore. For more information, see Vertex AI Feature Store (Legacy) pricing.
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Introduction to Vertex AI Feature Store (Legacy)\n\nVertex AI Feature Store (Legacy) provides a\ncentralized repository for organizing, storing, and serving ML features.\nUsing a central featurestore enables an organization to efficiently share,\ndiscover, and re-use ML features at scale, which can increase the velocity of\ndeveloping and deploying new ML applications.\n\nVertex AI Feature Store (Legacy) is a fully managed solution, which manages\nand scales the underlying infrastructure such as storage and compute\nresources. This solution means that data scientists can focus on the\nfeature computation logic instead of worrying about the challenges of deploying\nfeatures into production.\n\nVertex AI Feature Store (Legacy) is an integrated part of\nVertex AI. You can use Vertex AI Feature Store (Legacy)\nindependently or as part of Vertex AI workflows. For example, you\ncan fetch data from Vertex AI Feature Store (Legacy) to train custom or\nAutoML models in Vertex AI.\n\nVertex AI Feature Store (Legacy) is the predecessor of\nVertex AI Feature Store. To learn more about Vertex AI Feature Store,\nsee the [Vertex AI Feature Store documentation](/vertex-ai/docs/featurestore/latest/overview).\n\nOverview\n--------\n\nUse Vertex AI Feature Store (Legacy) to create and manage *featurestores* , *entity types* , and *features* . A featurestore is a top-level container for\nyour features and their values. When you set up a featurestore, permitted\nusers can add and share their features without additional engineering support.\nUsers can define features and then import (ingest) feature values from various\ndata sources. [Learn more about Vertex AI Feature Store (Legacy) data model and resources](/vertex-ai/docs/featurestore/concepts).\n\nAny permitted user can search and retrieve values from the featurestore. For\nexample, you can find features and then do a batch export to get training\ndata for ML model creation. You can also retrieve feature values in real time\nto perform fast online predictions.\n\nBenefits\n--------\n\nBefore using Vertex AI Feature Store (Legacy), you might have computed feature\nvalues and saved them in various locations such as tables in BigQuery\nand as files in Cloud Storage. Moreover, you might have built and managed\nseparate solutions for storage and the consumption of feature values. In\ncontrast, Vertex AI Feature Store (Legacy) provides a unified solution for\nbatch and online storage as well as the serving of ML features. The following\nsections details the benefits that Vertex AI Feature Store (Legacy) provides.\n\n### Share features across your organization\n\nIf you produce features in a featurestore, you can quickly share them with\nothers for training or serving tasks. Teams don't need to re-engineer features\nfor different projects or use cases. 
Also, because you can manage and serve\nfeatures from a central repository, you can maintain consistency across your\norganization and reduce duplicate efforts, particularly for high value\nfeatures.\n\nVertex AI Feature Store (Legacy) provides search and filter capabilities so\nthat others discover and reuse existing features. For each feature,\nyou can view relevant metadata to determine the quality and usage patterns of\nthe feature. For example, you can view the fraction of entities that have a\nvalid value for a feature (also known as *feature coverage*) and the statistical\ndistribution of feature values.\n\n### Managed solution for online serving at scale\n\nVertex AI Feature Store (Legacy) provides a managed solution for online\nfeature serving (low-latency serving), which is critical for making timely\nonline predictions. You don't need to build and operate low-latency data\nserving infrastructure; Vertex AI Feature Store (Legacy) does this for you and\nscales as needed. You code the logic to generate features but offload the task\nof serving features. All of this included management reduces the friction for\nbuilding new features, enabling data scientists to do their work without worrying\nabout deployment.\n\n### Mitigate training-serving skew\n\n*Training-serving skew* occurs when the feature data distribution that you use\nin production differs from the feature data distribution that was used to train\nyour model. This skew often results in discrepancies between a model's\nperformance during training and its performance in production. The following\nexamples describe how Vertex AI Feature Store (Legacy) can address potential\nsources of training-serving skew:\n\n- Vertex AI Feature Store (Legacy) ensures that a feature value is imported once into a featurestore and that same value is reused for both training and serving. Without a featurestore, you might have different code paths for generating features between training and serving. So, feature values might differ between training and serving.\n- Vertex AI Feature Store (Legacy) provides point-in-time lookups to fetch historical data for training. With these lookups, you can mitigate data leakage by fetching only the feature values that were available before a prediction and not after.\n\nFor more information about how to detect training-serving skew, see [View feature value anomalies](/vertex-ai/docs/featurestore/monitoring#view_feature_value_anomalies).\n\n### Detect drift\n\nVertex AI Feature Store (Legacy) helps you detect significant changes to your\nfeature data distribution over time, also known as *drift* .\nVertex AI Feature Store (Legacy) constantly tracks the distribution of feature\nvalues that are imported into the featurestore. As feature drift increases, you\nmight need to retrain models that are using the affected features. For more\ninformation about how to detect drift, see [View feature value anomalies](/vertex-ai/docs/featurestore/monitoring#view_feature_value_anomalies).\n\nQuotas and limits\n-----------------\n\nVertex AI Feature Store (Legacy) enforces quotas and limits to help you manage\nresources by setting your own usage limits and to protect the community of\nGoogle Cloud users by preventing unforeseen spikes in usage. To prevent you from\nhitting unplanned constraints, review Vertex AI Feature Store (Legacy) quotas\non the [Quotas and limits](/vertex-ai/quotas#featurestore) page. 
For example,\nVertex AI Feature Store (Legacy) sets a quota on the number of online serving\nnodes and a quota on the number of online serving requests that you can make per\nminute.\n\nData retention\n--------------\n\nVertex AI Feature Store (Legacy) keeps feature values up to the [data\nretention limit](/vertex-ai/quotas#featurestore). This limit is based on the\ntimestamp associated with the feature values, not when the values were imported.\nVertex AI Feature Store (Legacy) schedules to delete values with timestamps\nthat exceed the limit.\n\nPricing\n-------\n\nVertex AI Feature Store (Legacy) pricing is based on several factors, such as\nhow much data you store and the number of featurestore online nodes you use.\nCharges start right after you create a featurestore. For more information, see\n[Vertex AI Feature Store (Legacy) pricing](/vertex-ai/pricing#featurestore).\n\nWhat's next\n-----------\n\n- Learn about the Vertex AI Feature Store (Legacy) [data model and its\n resources](/vertex-ai/docs/featurestore/concepts).\n- Learn [how to set up a project and set Identity and Access Management permissions for\n Vertex AI Feature Store (Legacy)](/vertex-ai/docs/featurestore/setup).\n- View Vertex AI Feature Store (Legacy) quotas on the [Quotas and limits\n page](/vertex-ai/quotas#featurestore)."]]