Last updated: 2025-09-04 (UTC).

# Vision Warehouse overview

Vision Warehouse is an API that enables developers to integrate storage
and AI-based search of unstructured media content (streaming video, images, and
batch videos) into existing tools and applications.

Vision Warehouse is a major component of Vertex AI Vision.
It serves as the storage repository and provides advanced search capabilities
for multiple data types and use cases. Specifically:

- **Streaming video**: You can import live video streams and live video analytics data using the Vertex AI Vision platform application or the Vision Warehouse API, and search the ingested video using the Vision Warehouse API or the Google Cloud console.
- **Image**: You can import images and metadata using the Vision Warehouse API, analyze images using the Vision Warehouse API, and search for images using the Vision Warehouse API or the Google Cloud console.
- **Batch video**: You can import batch videos and metadata using the Vision Warehouse API, analyze batch videos using the Vision Warehouse API, and search for batch videos using the Vision Warehouse API or the Google Cloud console.

API resources overview
----------------------

### Storage API resources

**Corpus**: A container that holds media assets of a particular type. You can create multiple corpora to organize different types of media assets.

**Asset**: A media object stored within a corpus. Assets can be images, batch videos, or video streams. A corpus typically contains many assets of the same type. You can specify annotations associated with assets.
Assets can also be grouped into collections for easier management.

**Collection**: A resource within a corpus that serves as a container of references to assets.

**Annotation**: User-supplied metadata, or data derived from Vertex AI Vision, that is associated with an asset. An asset can have multiple annotations.

- Example 1: Specify a text annotation named "video-title" for batch video assets.
- Example 2: Store analyzed data from Vertex AI Vision models as annotations. For example, object recognition labels in different video time frames can be stored as annotations.

**Data schema**: Defines how an annotation is interpreted within a corpus. A data schema defines one annotation type and its search strategy. Each annotation must be associated with a data schema.

### Search API resources

**Index** (available to the image and batch video verticals): A corpus-level resource that is a managed representation of analyzed assets and annotations. An index can be viewed as a dataset of embedding vectors and semantic restrictions that represents the meaning of the media content. Indexes can be deployed to index endpoints for search.

**Index endpoint** (available to the image and batch video verticals): A managed environment that serves Vision Warehouse indexes. Index endpoints provide a single point of access for sending search requests.

| **Important:** `Index` and `IndexEndpoint` are not available in Streaming Warehouse.
Users must search with the [`projects.locations.corpora.searchAssets`](/vision-ai/docs/reference/rest/v1/projects.locations.corpora/searchAssets) method instead.

**Search configuration**: Stores properties that affect search behavior and search results.

- Facet property (available to the streaming video vertical): Creates a configuration that enables facet-based histogram search results.
- Search criteria property (available to the streaming video and batch video verticals): Creates a mapping between a custom search criterion and one or more data schema keys.

**Search hypernym**: A type of search configuration that lets you customize the search service's ability to recognize hypernyms of words. For example, you can specify "animal" as a hypernym of "cat" and "dog"; a search for "animal" then also returns results that match "cat" and "dog" in the index data.

### Supported languages

Batch Video Warehouse and Image Warehouse support the
following languages for semantic search:

- English
- Spanish
- Portuguese
- French
- Japanese
- Chinese

Streaming Warehouse has no language restrictions.

What's next
-----------

- Understand the key API user flows for each media vertical: [streaming video](/vision-ai/docs/create-manage-streaming-warehouse), [image](/vision-ai/docs/image-warehouse-overview), and [batch video](/vision-ai/docs/batch-video-warehouse-overview).
- Explore Vision Warehouse [Quotas](/vision-ai/quotas#warehouse-quotas) and [Limits](/vision-ai/quotas#warehouse-limits).
- Get familiar with [Pricing](/vision-ai/pricing).
- Discover how to obtain [support](/vision-ai/docs/getting-support).
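To make the search-criteria concept above concrete, the following sketch assembles a request body for the `projects.locations.corpora.searchAssets` method linked earlier. The annotation key `"display-name"` and the value `"camera-1"` are placeholder assumptions, and the exact field layout should be verified against the `searchAssets` REST reference before use:

```python
# Sketch (not an official sample): build a searchAssets request body
# for a Streaming Warehouse corpus. Each criterion pairs an annotation
# key (defined by a data schema in the corpus) with the text values to
# match. "display-name" and "camera-1" are hypothetical placeholders.

def build_search_request(field, values, page_size=10):
    """Return a searchAssets request body as a plain dict."""
    return {
        "pageSize": page_size,
        "criteria": [
            {
                "field": field,
                "textArray": {"txtValues": list(values)},
            }
        ],
    }

request_body = build_search_request("display-name", ["camera-1"])
```

A body like this would be POSTed to the `searchAssets` URL of the target corpus with an authenticated client; the response pages through matching assets.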