About vector search

Memorystore for Redis supports storing and querying vector data. This page provides information about vector search on Memorystore for Redis.

Vector search on Memorystore for Redis is compatible with the open source LLM framework LangChain. Using vector search with LangChain lets you build solutions for the following use cases:

  • Retrieval Augmented Generation (RAG)
  • LLM cache
  • Recommendation engine
  • Semantic search
  • Image similarity search
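For example, the following is a minimal sketch of the semantic search use case built on LangChain's community Redis vector store (Google also publishes a dedicated langchain-google-memorystore-redis integration). The endpoint address, index name, and stand-in embedding model are assumptions for illustration; substitute your instance's IP address and a production embedding model.

```python
# Minimal semantic search sketch with LangChain's Redis vector store.
from langchain_community.embeddings import FakeEmbeddings  # stand-in model
from langchain_community.vectorstores.redis import Redis

embeddings = FakeEmbeddings(size=128)  # replace with a real embedding model

# Index a few documents; LangChain creates the vector index on the instance.
vector_store = Redis.from_texts(
    texts=[
        "Memorystore provides low-latency vector search.",
        "HNSW trades a little accuracy for speed.",
    ],
    embedding=embeddings,
    redis_url="redis://10.0.0.3:6379",  # illustrative Memorystore endpoint
    index_name="docs",
)

# Semantic search: return the stored text closest to the query.
results = vector_store.similarity_search("fast approximate search", k=1)
print(results[0].page_content)
```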

The advantage of using Memorystore to store your gen AI data, as opposed to other Google Cloud databases, is Memorystore's speed. Vector search on Memorystore for Redis leverages multi-threaded queries, resulting in high query throughput (QPS) at low latency.

Memorystore also provides two distinct search approaches to help you find the right balance between speed and accuracy. The HNSW (Hierarchical Navigable Small World) option delivers fast, approximate results, which is ideal for large datasets where a close match is sufficient. If you require absolute precision, the FLAT approach produces exact answers, though it may take slightly longer to process.
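To make the trade-off concrete, the following sketch creates an HNSW index with redis-py, assuming Memorystore accepts RediSearch-style FT.CREATE syntax. The index name, field names, vector dimension, and endpoint are illustrative; swapping the algorithm string "HNSW" for "FLAT" requests exact search instead.

```python
# Sketch: create a vector index and store one document with redis-py.
import numpy as np
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

r = redis.Redis(host="10.0.0.3", port=6379)  # illustrative endpoint

# HNSW gives fast, approximate nearest-neighbor search; use "FLAT" for
# exact (brute-force) results at the cost of query speed.
r.ft("doc_idx").create_index(
    fields=[
        TagField("category"),
        VectorField(
            "embedding",
            "HNSW",
            {"TYPE": "FLOAT32", "DIM": 128, "DISTANCE_METRIC": "COSINE"},
        ),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Store a document whose key matches the index prefix; the vector is
# serialized as raw float32 bytes to match the index definition.
r.hset("doc:1", mapping={
    "category": "news",
    "embedding": np.random.rand(128).astype(np.float32).tobytes(),
})
```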

If you want to optimize your application for the fastest vector data read/write speeds, Memorystore for Redis is likely the best option for you.

You can use vector search to query data stored in your Redis instance. This feature is not available on Memorystore for Redis Cluster.
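For example, a K-nearest-neighbor query against the illustrative index above might look like the following, assuming RediSearch-style FT.SEARCH syntax with query dialect 2:

```python
# Sketch: KNN vector query with redis-py against the index created above.
import numpy as np
import redis
from redis.commands.search.query import Query

r = redis.Redis(host="10.0.0.3", port=6379)  # illustrative endpoint

# Encode the query vector as raw float32 bytes, matching the index's
# TYPE and DIM settings.
query_vec = np.random.rand(128).astype(np.float32).tobytes()

# Ask for the 3 nearest documents by cosine distance, sorted by score.
q = (
    Query("*=>[KNN 3 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("score", "category")
    .dialect(2)
)
results = r.ft("doc_idx").search(q, query_params={"vec": query_vec})
for doc in results.docs:
    print(doc.id, doc.score, doc.category)
```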