Streaming import lets you make real-time updates to feature values. This method is useful when having the latest available data for online serving is a priority. For example, you can import streaming event data and, within a few seconds, Vertex AI Feature Store (Legacy) makes that data available for online serving scenarios.
If you must backfill data or if you compute feature values in batch, use batch import. Compared to streaming import requests, batch import requests can handle larger payloads but take longer to complete.

For information about the oldest feature value timestamp that you can import, see Vertex AI Feature Store (Legacy) in Quotas and limits. You can't import feature values for which the timestamps indicate future dates or times.
Example use case

An online retail organization might provide a personalized shopping experience by using the current activity of a user. As users navigate through the website, you can capture their activity into a featurestore and then, soon after, serve all that information for online predictions. This real-time import and serving can help you show useful and relevant recommendations to customers during their shopping session.
Online storage node usage
Writing feature values to an online store uses the featurestore's CPU resources (online storage nodes). Monitor your CPU usage to check that demand doesn't exceed supply, which can lead to serving errors. We recommend around a 70% usage rate or lower to avoid these errors. If you regularly exceed that value, you can update your featurestore to increase the number of nodes, or use autoscaling. For more information, see Manage featurestores.
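As a rough illustration of the 70% guideline, you could run a client-side check over utilization samples you have already fetched from Cloud Monitoring. The numbers, the helper function, and the "regularly exceeds" interpretation below are all hypothetical, not part of any Google Cloud API:

```python
# Hypothetical CPU utilization samples (fraction of capacity) for the
# featurestore's online storage nodes, e.g. pulled from Cloud Monitoring.
cpu_samples = [0.55, 0.62, 0.74, 0.71, 0.68]

USAGE_THRESHOLD = 0.70  # recommended ceiling from the guidance above


def should_scale_up(samples, threshold=USAGE_THRESHOLD, tolerance=0.2):
    """Return True if more than `tolerance` of the samples exceed `threshold`.

    "Regularly exceeding" the threshold is interpreted here as a fraction of
    samples over the line; tune `tolerance` for your own alerting policy.
    """
    if not samples:
        return False
    over = sum(1 for s in samples if s > threshold)
    return over / len(samples) > tolerance


if should_scale_up(cpu_samples):
    print("Consider adding online storage nodes or enabling autoscaling.")
```

In a real deployment you would feed this from a Cloud Monitoring query and act on the result by updating the featurestore's node count or autoscaling settings.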
Streaming import
Write a value to a particular feature. The feature value must be included as part of the import request. You can't stream data directly from a data source.
If you're writing to recently created features, wait a few minutes before you do so, because the new features might not have propagated yet. Otherwise, you might see a resource not found error.
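Propagation delays like this are commonly handled with a retry loop. Below is a minimal, library-agnostic sketch under stated assumptions: `write_fn` stands in for whatever call you use to write feature values, and matching on the "not found" substring of the error message is an assumption you should adapt to your client library's exception types:

```python
import time


def write_with_retry(write_fn, max_attempts=5, base_delay=30.0):
    """Call write_fn(), retrying on 'resource not found'-style errors.

    Newly created features may take a few minutes to propagate, so we back
    off between attempts (base_delay, 2*base_delay, ...). Any other error,
    or exhausting max_attempts, re-raises.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return write_fn()
        except Exception as exc:  # narrow this to your library's exception type
            if "not found" not in str(exc).lower() or attempt == max_attempts:
                raise
            time.sleep(base_delay * attempt)
```

For example, `write_with_retry(lambda: my_entity_type.write_feature_values(instances=my_data))` would keep retrying a write against a feature that has not finished propagating.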
You can import feature values for only one entity per write. For any specific project and region, you can simultaneously write feature values for multiple entities within a maximum of ten different entity types. This limit includes streaming import requests to all featurestores in a given project and region. If you exceed this limit, Vertex AI Feature Store (Legacy) might not write all of your data to the offline store. If this occurs, Vertex AI Feature Store (Legacy) logs the error in the Logs Explorer. For more information, see Monitor offline storage write errors for streaming import.
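If your writes fan out across many entity types, one way to stay under this limit is to group pending writes by entity type and cap how many distinct entity types you touch concurrently. The helper below is a hypothetical client-side sketch, not part of the SDK; only the ten-type ceiling comes from the limit described above:

```python
from collections import defaultdict

MAX_CONCURRENT_ENTITY_TYPES = 10  # per project and region, per the limit above


def group_writes(pending):
    """Group pending writes into batches touching at most ten entity types.

    `pending` is a list of (entity_type_id, entity_id, feature_values)
    tuples. Each returned batch maps entity_type_id -> list of
    (entity_id, feature_values); send one batch at a time.
    """
    by_type = defaultdict(list)
    for entity_type_id, entity_id, feature_values in pending:
        by_type[entity_type_id].append((entity_id, feature_values))

    type_ids = sorted(by_type)
    batches = []
    for i in range(0, len(type_ids), MAX_CONCURRENT_ENTITY_TYPES):
        batch = {t: by_type[t] for t in type_ids[i:i + MAX_CONCURRENT_ENTITY_TYPES]}
        batches.append(batch)
    return batches
```

Remember that within a batch each entity still needs its own write request, since a single write covers only one entity.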
REST

To import feature values for existing features, send a POST request by using the featurestores.entityTypes.writeFeatureValues method. If the names of the source data columns and the destination feature IDs are different, include the sourceField parameter. Note that writeFeatureValues lets you import feature values for only one entity at a time.

Before using any of the request data, make the following replacements:

- LOCATION: Region where the featurestore is created. For example, us-central1.
- PROJECT: Your project ID.
- FEATURESTORE_ID: ID of the featurestore.
- ENTITY_TYPE_ID: ID of the entity type.
- FEATURE_ID: ID of an existing feature in the featurestore to write values for.
- VALUE_TYPE: The value type of the feature.
- VALUE: Value for the feature.
- TIME_STAMP (optional): The time at which the feature was generated. The timestamp must be in the RFC3339 UTC format.
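For example, an RFC3339 UTC timestamp can be produced with Python's standard library:

```python
from datetime import datetime, timezone

# Current time in RFC3339 UTC format, e.g. "2024-01-15T09:30:00.123456Z".
generate_time = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")

# A fixed example value:
example = datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc).isoformat().replace("+00:00", "Z")
# example == "2024-01-15T09:30:00Z"
```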
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID:writeFeatureValues
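For illustration, the URL and request body can be assembled programmatically before sending. The helper below only builds the pieces; it is a sketch, not an official client, and actually sending the request still requires an OAuth access token as shown in the curl example:

```python
import json

API = "https://{location}-aiplatform.googleapis.com/v1"


def build_write_request(project, location, featurestore_id, entity_type_id,
                        entity_id, feature_id, value_type, value, time_stamp):
    """Build (url, body) for a writeFeatureValues call.

    value_type must match the feature's declared type (e.g. "int64Value");
    the argument values used below in the test are examples only.
    """
    url = (
        f"{API.format(location=location)}/projects/{project}"
        f"/locations/{location}/featurestores/{featurestore_id}"
        f"/entityTypes/{entity_type_id}:writeFeatureValues"
    )
    body = {
        "payloads": [{
            "entityId": entity_id,
            "featureValues": {
                feature_id: {
                    value_type: value,
                    "metadata": {"generate_time": time_stamp},
                }
            },
        }]
    }
    return url, json.dumps(body)
```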
Request JSON body:

```
{
  "payloads": [
    {
      "entityId": "ENTITY_ID",
      "featureValues": {
        "FEATURE_ID": {
          "VALUE_TYPE": VALUE,
          "metadata": {"generate_time": "TIME_STAMP"}
        }
      }
    }
  ]
}
```

To send your request, choose one of these options:

curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

```
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID:writeFeatureValues"
```

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

```
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID:writeFeatureValues" | Select-Object -Expand Content
```

You should receive a successful status code (2xx) and an empty response.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

```
from google.cloud import aiplatform


def write_feature_values_sample(
    project: str, location: str, entity_type_id: str, featurestore_id: str
):
    aiplatform.init(project=project, location=location)

    my_entity_type = aiplatform.featurestore.EntityType(
        entity_type_name=entity_type_id, featurestore_id=featurestore_id
    )

    my_data = {
        "movie_01": {
            "title": "The Shawshank Redemption",
            "average_rating": 4.7,
            "genre": "Drama",
        },
    }

    my_entity_type.write_feature_values(instances=my_data)
```

Additional languages

You can install and use the following Vertex AI client libraries to call the Vertex AI API: Java and Node.js. Cloud Client Libraries provide an optimized developer experience by using the natural conventions and styles of each supported language.

To learn more, run the "Example Feature Store workflow with sample data" notebook, available in Colab, Colab Enterprise, and Vertex AI Workbench, or view it on GitHub: https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/feature_store_legacy/sdk-feature-store.ipynb

What's next

- Learn how to monitor offline storage write errors for streaming import.
- Learn how to serve features through online serving or batch serving.
- Troubleshoot common Vertex AI Feature Store (Legacy) issues.

Last updated: 2025-09-08 (UTC)