Last updated (UTC): 2025-09-10.

# Features and limitations

This page provides details about vector search features and limitations.

Availability
------------

Vector search is available on all Memorystore for Redis Cluster versions, across all tiers and in all supported regions.

Only instances created after the launch date of September 13, 2024 have vector search enabled.

Index restrictions
------------------

The following limits apply to an index:

- An index cannot have more than 10 attributes.
- A vector's dimension cannot exceed 32,768.
- The M value for HNSW cannot exceed 2M.
- The EF construction value for HNSW cannot exceed 4,096.
- The EF runtime value for HNSW cannot exceed 4,096.

Impacts on performance
----------------------

When evaluating vector search performance, there are several important variables to consider.

### Node type

Vector search scales vertically by using thread pools dedicated to executing vector search operations. As a result, performance is tied to the number of vCPUs on each node in your cluster. For details on the number of vCPUs available on each node type, see [Cluster and node specification](/memorystore/docs/cluster/cluster-node-specification#node_characteristics).

### Number of shards

Memorystore for Redis Cluster uses local indexing for all vectors. This means that the index stored on each shard contains only the documents stored on that shard.
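The index limits listed above can be checked client-side before a create-index command is sent. The following is a minimal sketch, assuming the RediSearch-style `FT.CREATE ... VECTOR HNSW` argument layout; the helper name and defaults are hypothetical, not part of the Memorystore API:

```python
# Documented Memorystore for Redis Cluster index limits (see list above).
MAX_DIM = 32_768
MAX_EF_CONSTRUCTION = 4_096

def build_hnsw_create(index_name, field, dim, m=16, ef_construction=200):
    """Hypothetical helper: build an FT.CREATE command for an HNSW vector
    index, validating the documented limits before sending it to Redis."""
    if dim > MAX_DIM:
        raise ValueError(f"dimension {dim} exceeds limit {MAX_DIM}")
    if ef_construction > MAX_EF_CONSTRUCTION:
        raise ValueError(
            f"EF_CONSTRUCTION {ef_construction} exceeds limit {MAX_EF_CONSTRUCTION}"
        )
    # "10" is the count of schema arguments that follow the HNSW keyword.
    return [
        "FT.CREATE", index_name, "SCHEMA", field, "VECTOR", "HNSW", "10",
        "TYPE", "FLOAT32", "DIM", str(dim), "DISTANCE_METRIC", "COSINE",
        "M", str(m), "EF_CONSTRUCTION", str(ef_construction),
    ]
```

The resulting argument list can be passed to any Redis client's generic command-execution method; failing fast on the client avoids a round trip that the server would reject anyway.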
Because each shard indexes only its own documents, indexing throughput and total vector capacity scale linearly with the number of shards in the cluster.

Since each local index contains only the contents of a single shard, searching the cluster requires searching every shard and aggregating the results. For a fixed number of vectors, increasing the number of shards improves search performance logarithmically for HNSW indexes and linearly for FLAT indexes, because each local index holds fewer vectors.

Note that because of the additional work needed to search all shards, the observable latency of a given search request may increase as more shards are added. Even so, the largest clusters still support single-digit millisecond latencies.

### Number of replicas

Adding replicas increases search throughput linearly by allowing search requests to be load balanced across read replicas.

Scaling events
--------------

When you resize your Redis cluster, the documents in your indexes are moved to distribute the data uniformly across the new shard count. Documents that move across nodes are re-indexed in the background. After the scaling operation completes, you can monitor the value of `mutation_queue_size` in the [FT.INFO](/memorystore/docs/cluster/ftinfo) output to track the progress of re-indexing.

Memory consumption
------------------

Each vector is stored twice: once in the Redis keyspace and once in the vector search index.

Transactions
------------

Because thread pools execute vector search operations asynchronously, vector search operations don't adhere to transactional semantics.
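The `mutation_queue_size` check described under Scaling events can be sketched as follows. This assumes FT.INFO returns a flat list of alternating field names and values (the common RediSearch-style reply shape); the helper name and the sample reply are illustrative, not actual server output:

```python
def mutation_queue_size(ft_info_reply):
    """Hypothetical helper: pair up the alternating name/value entries of an
    FT.INFO reply and return mutation_queue_size, the number of documents
    still waiting to be (re-)indexed, e.g. after a scaling event."""
    fields = dict(zip(ft_info_reply[::2], ft_info_reply[1::2]))
    return int(fields["mutation_queue_size"])

# Illustrative reply fragment; field values are made up for the example.
sample_reply = ["index_name", "idx", "num_docs", "1000",
                "mutation_queue_size", "42"]
```

In practice you would poll FT.INFO with your Redis client until this value reaches 0, indicating that background re-indexing after the resize has finished.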