# About vector search

Memorystore for Redis Cluster supports storing and querying vector data. This page provides
information about vector search on Memorystore for Redis Cluster.

**Important:** To use vector search, your cluster must be created after the feature launch date of September 13, 2024. If your cluster was created prior to this date, you will need to [create](/memorystore/docs/cluster/create-instances) a new cluster to use this feature.
Vector search on Memorystore for Redis Cluster is compatible with the open-source LLM
framework [LangChain](https://python.langchain.com/docs/get_started/introduction).
Using vector search with LangChain lets you build solutions for the following
use cases, as shown in the sketch after this list:
- Retrieval Augmented Generation (RAG)
- LLM cache
- Recommendation engine
- Semantic search
- Image similarity search
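
For example, the following minimal Python sketch runs a semantic search through LangChain's community Redis vector store. It assumes the `langchain-community` and `redis` packages are installed; the endpoint address, index name, and the `FakeEmbeddings` stand-in are illustrative placeholders, not a production setup.

```python
# A minimal semantic-search sketch using LangChain's community Redis
# vector store. Assumptions: REDIS_URL points at your Memorystore for
# Redis Cluster endpoint, and FakeEmbeddings stands in for a real
# embedding model.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Redis

REDIS_URL = "redis://10.0.0.3:6379"  # placeholder Memorystore endpoint

embeddings = FakeEmbeddings(size=128)  # swap in a real embedding model

# Index a few documents; Redis.from_texts creates the vector index.
store = Redis.from_texts(
    texts=[
        "Memorystore for Redis Cluster supports vector search.",
        "HNSW gives fast, approximate nearest-neighbor results.",
        "FLAT performs an exact, brute-force search.",
    ],
    embedding=embeddings,
    redis_url=REDIS_URL,
    index_name="docs",
)

# Semantic search: return the documents closest to the query.
results = store.similarity_search("How do I do approximate vector search?", k=2)
for doc in results:
    print(doc.page_content)
```

In practice, replace `FakeEmbeddings` with a real embedding model so that semantically similar text maps to nearby vectors.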
The advantage of using Memorystore to store your generative AI
data is Memorystore's speed. Vector search on
Memorystore for Redis Cluster uses multi-threaded query execution, which yields
high query throughput (QPS) at low latency.
Memorystore also provides two distinct search approaches to help you find the right balance between speed and accuracy. The HNSW (Hierarchical Navigable Small World) approach delivers fast, approximate results, and is ideal for large datasets where a close match is sufficient. If you require absolute precision, the FLAT approach produces exact answers, though queries may take slightly longer to process. The sketch after this paragraph shows how this choice appears at index creation time.
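
To make the trade-off concrete, here is a hedged sketch that creates one index of each type with the redis-py client. The endpoint, key prefix, vector dimensionality, and HNSW tuning parameters (`M`, `EF_CONSTRUCTION`) are assumptions chosen for illustration, not recommended values.

```python
# A sketch of creating HNSW and FLAT vector indexes with redis-py.
# Assumptions: the `redis` package is installed, and the host/port
# below point at your Memorystore for Redis Cluster instance.
from redis.cluster import RedisCluster
from redis.commands.search.field import VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

r = RedisCluster(host="10.0.0.3", port=6379)  # placeholder endpoint

# HNSW: fast approximate search, tunable via M and EF_CONSTRUCTION.
r.ft("idx_hnsw").create_index(
    fields=[
        VectorField("embedding", "HNSW", {
            "TYPE": "FLOAT32",
            "DIM": 128,
            "DISTANCE_METRIC": "COSINE",
            "M": 16,
            "EF_CONSTRUCTION": 200,
        }),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# FLAT: exact brute-force search; no approximation parameters needed.
r.ft("idx_flat").create_index(
    fields=[
        VectorField("embedding", "FLAT", {
            "TYPE": "FLOAT32",
            "DIM": 128,
            "DISTANCE_METRIC": "COSINE",
        }),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)
```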
If you want to optimize your application for the fastest vector data read and write
speeds, Memorystore for Redis Cluster is likely the best option for you.
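
To make the read and write paths concrete, the following sketch stores a vector and runs a K-nearest-neighbors query with redis-py against the hypothetical `idx_hnsw` index from the earlier sketch. The endpoint and the 128-dimension random vectors are placeholders.

```python
# A sketch of writing a vector and running a KNN query with redis-py,
# against the hypothetical "idx_hnsw" index created above.
import numpy as np
from redis.cluster import RedisCluster
from redis.commands.search.query import Query

r = RedisCluster(host="10.0.0.3", port=6379)  # placeholder endpoint

# Write: store an embedding as raw float32 bytes in a hash field.
vec = np.random.rand(128).astype(np.float32)
r.hset("doc:1", mapping={"embedding": vec.tobytes()})

# Read: find the 5 nearest neighbors of a query vector.
query_vec = np.random.rand(128).astype(np.float32)
q = (
    Query("*=>[KNN 5 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("score")
    .dialect(2)
)
results = r.ft("idx_hnsw").search(q, query_params={"vec": query_vec.tobytes()})
for doc in results.docs:
    print(doc.id, doc.score)
```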
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# About vector search\n\nMemorystore for Redis Cluster supports storing and querying vector data. This page provides\ninformation about vector search on Memorystore for Redis Cluster.\n| **Important:** To use vector search, your cluster must be created after the feature launch date of September 13, 2024. If your cluster was created prior to this date, you will need to [create](/memorystore/docs/cluster/create-instances) a new cluster to use this feature.\n\nVector search on Memorystore for Redis Cluster is compatible with the open-source LLM\nframework [LangChain](https://python.langchain.com/docs/get_started/introduction).\nUsing vector search with LangChain lets you build solutions for the following\nuse cases:\n\n- Retrieval Augmented Generation (RAG)\n- LLM cache\n- Recommendation engine\n- Semantic search\n- Image similarity search\n\nThe advantage of using Memorystore to store your generative AI\ndata is Memorystore's speed. Vector search on\nMemorystore for Redis Cluster leverages multi-threaded queries, resulting in\nhigh query throughput (QPS) at low latency.\n\nMemorystore also provides two distinct search approaches to help you find the right balance between speed and accuracy. The HNSW (Hierarchical Navigable Small World) option delivers fast, approximate results - ideal for large datasets where a close match is sufficient. If you require absolute precision, the 'FLAT' approach produces exact answers, though it may take slightly longer to process.\n\nIf you want to optimize your application for the fastest vector data read and write\nspeeds, Memorystore for Redis Cluster is likely the best option for you."]]