```python
RedisVectorStore(
    client: redis.client.Redis,
    index_name: str,
    embeddings: langchain_core.embeddings.embeddings.Embeddings,
    content_field: str = "page_content",
    vector_field: str = "vector",
)
```
Initialize a RedisVectorStore instance.
Parameters

| Name | Type | Description |
|---|---|---|
| `client` | `redis.Redis` | The Redis client instance used for database operations, providing connectivity and command execution against the Redis instance. |
| `index_name` | `str` | The name assigned to the vector index within Redis, used to identify the index for operations such as searching and indexing. |
| `embeddings` | `Embeddings` | An embedding service or model capable of generating vector embeddings from document content, used to convert text documents into vector representations for storage and search. |
| `content_field` | `str`, optional | The field within the Redis HASH where document content is stored; read to obtain document text for embedding during indexing. Defaults to `"page_content"`; override it if content is stored under a different field. |
| `vector_field` | `str`, optional | The field within the Redis HASH that stores the document's vector embedding, used both when adding documents and when retrieving or searching by vector. Defaults to `"vector"`. |
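A minimal construction sketch, wiring an existing Redis client to an embedding service. The endpoint, index name, and the use of LangChain's `FakeEmbeddings` as a stand-in embedder are all placeholders, not values from this reference:

```python
def make_store():
    """Build a RedisVectorStore against an existing index (sketch)."""
    import redis
    from langchain_core.embeddings import FakeEmbeddings  # stand-in embedder
    from langchain_google_memorystore_redis import RedisVectorStore

    # Placeholder endpoint for a Memorystore for Redis instance.
    client = redis.Redis(host="10.0.0.3", port=6379)

    return RedisVectorStore(
        client=client,
        index_name="my_vector_index",  # the index must already exist (see init_index)
        embeddings=FakeEmbeddings(size=128),
        content_field="page_content",  # defaults shown explicitly
        vector_field="vector",
    )
```

In a real application you would substitute a production embedding model for `FakeEmbeddings` and point the client at your instance.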
Methods
_similarity_search_by_vector_with_score_and_embeddings
```python
_similarity_search_by_vector_with_score_and_embeddings(
    query_embedding: typing.List[float], k: int = 4, **kwargs: typing.Any
) -> typing.List[
    typing.Tuple[langchain_core.documents.base.Document, float, typing.List[float]]
]
```
Performs a similarity search for the given query embedding, returning each match together with its score and its stored embedding; behavior can be customized via keyword arguments.
Parameters

| Name | Type | Description |
|---|---|---|
| `query_embedding` | `List[float]` | The embedding vector of the query for the similarity search. |
| `k` | `int`, optional | The number of nearest neighbors to retrieve. Defaults to 4. |
| `**kwargs` | `Any` | Additional keyword arguments customizing the search. Key options: `distance_threshold` (`float`, optional), a threshold for filtering results by distance or score (falls back to `score_threshold` if provided); `distance_strategy` (`str`, optional), the strategy used when comparing distances or scores, with a default applied if unspecified. |
Returns

| Type | Description |
|---|---|
| `List[Tuple[Document, float, List[float]]]` | A list of tuples, each containing a `Document` object, its distance (score) from the query embedding, and its own embedding vector. The `Document` includes content and metadata. |

Note: The method adjusts its behavior based on the keyword arguments supplied; for instance, if `distance_threshold` is given, only results meeting that threshold are returned.
add_texts
```python
add_texts(
    texts: typing.Iterable[str],
    metadatas: typing.Optional[typing.List[dict]] = None,
    ids: typing.Optional[typing.List[str]] = None,
    batch_size: typing.Optional[int] = 1000,
    **kwargs: typing.Any
) -> typing.List[str]
```
Adds a collection of texts and optional metadata to the vector store, generating a unique key for each entry when none is provided.
Parameters

| Name | Type | Description |
|---|---|---|
| `texts` | `Iterable[str]` | An iterable collection of text documents to add to the vector store. |
| `metadatas` | `Optional[List[dict]]`, optional | An optional list of metadata dictionaries, each corresponding to a text document in the same order as `texts`. |
| `ids` | `Optional[List[str]]`, optional | An optional list of unique identifiers for the documents. If omitted, identifiers are generated; if provided, the list must be the same length as `texts`. |
| `batch_size` | `int`, optional | The number of documents processed per batch, which helps manage memory and performance when adding many documents. Defaults to 1000. |
Returns

| Type | Description |
|---|---|
| `List[str]` | The unique keys or identifiers of the added documents, usable to retrieve or reference them within the vector store. |

Note: If both `ids` and `metadatas` are provided, each must be the same length as `texts`.
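A sketch of the alignment requirement between `texts`, `metadatas`, and `ids`; the document contents and id scheme are invented for illustration, and `store` is assumed to be an already-initialized `RedisVectorStore`:

```python
def index_documents(store):
    """Add three documents with aligned metadatas and ids (sketch)."""
    texts = [
        "Memorystore is a managed Redis service.",
        "Vector search finds semantically similar text.",
        "HNSW is an approximate nearest-neighbor index.",
    ]
    # metadatas and ids must align with texts, element for element.
    metadatas = [{"source": f"doc-{i}"} for i in range(len(texts))]
    ids = [f"doc-{i}" for i in range(len(texts))]

    keys = store.add_texts(texts, metadatas=metadatas, ids=ids, batch_size=1000)
    return keys  # one key per added document
```

If `ids` were omitted, the returned keys would be the generated identifiers instead.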
delete
```python
delete(
    ids: typing.Optional[typing.List[str]] = None, **kwargs: typing.Any
) -> typing.Optional[bool]
```
Delete by vector ID or other criteria.
Parameters

| Name | Type | Description |
|---|---|---|
| `ids` | `Optional[List[str]]` | List of ids to delete. |
Returns

| Type | Description |
|---|---|
| `Optional[bool]` | `True` if deletion succeeds, `False` otherwise; `None` if not implemented. |
drop_index
```python
drop_index(client: redis.client.Redis, index_name: str, index_only: bool = True)
```
Drops an index from the Redis database. Optionally, it can also delete the documents associated with the index.
Parameters

| Name | Type | Description |
|---|---|---|
| `client` | `Redis` | The Redis client instance used to connect to the database and issue the necessary commands. |
| `index_name` | `str` | The name of the index to drop; it must exactly match an existing index in the Redis database. |
| `index_only` | `bool`, optional | Whether to drop only the index structure (`True`) or also delete the documents associated with the index (`False`). Defaults to `True`. |
Exceptions

| Type | Description |
|---|---|
| `redis.RedisError` | Raised on any Redis-specific error during the operation, including connection issues, authentication failures, or errors executing the drop command. Callers should handle this exception to manage error scenarios gracefully. |
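A sketch of calling `drop_index` with the `RedisError` handling the table above recommends; the `delete_documents` wrapper parameter is an illustrative convenience, not part of the API:

```python
def reset_index(client, index_name: str, delete_documents: bool = False):
    """Drop a vector index, optionally removing its documents (sketch)."""
    import redis
    from langchain_google_memorystore_redis import RedisVectorStore

    try:
        # index_only=True keeps the HASH documents; False deletes them too.
        RedisVectorStore.drop_index(
            client=client,
            index_name=index_name,
            index_only=not delete_documents,
        )
    except redis.RedisError as err:
        # Raised for connection issues, auth failures, or a bad index name.
        print(f"Failed to drop {index_name!r}: {err}")
```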
from_texts
```python
from_texts(
    texts: typing.List[str],
    embedding: langchain_core.embeddings.embeddings.Embeddings,
    metadatas: typing.Optional[typing.List[dict]] = None,
    ids: typing.Optional[typing.List[str]] = None,
    client: typing.Optional[redis.client.Redis] = None,
    index_name: typing.Optional[str] = None,
    **kwargs: typing.Any
) -> langchain_google_memorystore_redis.vectorstore.RedisVectorStore
```
Creates an instance of RedisVectorStore from the provided texts.
Parameters

| Name | Type | Description |
|---|---|---|
| `texts` | `List[str]` | A list of text documents to embed and index. |
| `embedding` | `Embeddings` | An instance capable of generating embeddings for the provided text documents. |
| `metadatas` | `Optional[List[dict]]`, optional | A list of dictionaries, each containing metadata for the corresponding text document in `texts`. |
| `ids` | `Optional[List[str]]`, optional | An optional list of unique identifiers for the documents. If omitted, identifiers are generated; if provided, the list must be the same length as `texts`. |
| `client` | `Optional[redis.Redis]`, optional | The Redis client instance used for database operations, providing connectivity and command execution against the Redis instance. |
| `index_name` | `Optional[str]`, optional | The name assigned to the vector index within Redis, used to identify the index for operations such as searching and indexing. |
| `**kwargs` | `Any` | Additional keyword arguments. |
Returns

| Type | Description |
|---|---|
| `RedisVectorStore` | An instance of `RedisVectorStore` populated with the embeddings of the provided texts and their associated metadata. |
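A sketch of building and populating a store in a single call; the endpoint, index name, sample texts, and the `FakeEmbeddings` stand-in embedder are all placeholders:

```python
def bootstrap_store():
    """Create and populate a RedisVectorStore from raw texts (sketch)."""
    import redis
    from langchain_core.embeddings import FakeEmbeddings  # stand-in embedder
    from langchain_google_memorystore_redis import RedisVectorStore

    client = redis.Redis(host="10.0.0.3", port=6379)  # placeholder endpoint
    return RedisVectorStore.from_texts(
        texts=["alpha", "beta", "gamma"],
        embedding=FakeEmbeddings(size=128),
        metadatas=[{"n": 1}, {"n": 2}, {"n": 3}],  # aligned with texts
        client=client,
        index_name="my_vector_index",
    )
```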
init_index
```python
init_index(
    client: redis.client.Redis,
    index_config: langchain_google_memorystore_redis.vectorstore.IndexConfig,
)
```
Initializes a named VectorStore index in Redis with specified configurations.
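A sketch of initializing an index before constructing the store. It assumes the package's `HNSWConfig` and `DistanceStrategy` helpers (subclasses of the `IndexConfig` named in the signature); field names may differ across versions, and the index name and vector size are placeholders:

```python
def create_index(client):
    """Initialize an HNSW-backed vector index in Redis (sketch)."""
    from langchain_google_memorystore_redis import (
        DistanceStrategy,
        HNSWConfig,
        RedisVectorStore,
    )

    index_config = HNSWConfig(
        name="my_vector_index",
        distance_strategy=DistanceStrategy.COSINE,
        vector_size=128,  # must match the embedding dimensionality
    )
    RedisVectorStore.init_index(client=client, index_config=index_config)
```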
max_marginal_relevance_search
```python
max_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    **kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
```
Performs a search to find documents that are both relevant to the query and diverse among each other based on Maximal Marginal Relevance (MMR).
The MMR algorithm optimizes a combination of relevance to the query and diversity among the results, controlled by the lambda_mult parameter.
Parameters

| Name | Type | Description |
|---|---|---|
| `query` | `str` | The query string used to find similar documents. |
| `k` | `int`, optional | The number of documents to return. Defaults to 4. |
| `fetch_k` | `int`, optional | The number of documents fetched for consideration; should be larger than `k` to allow for the diversity calculation. Defaults to 20. |
| `lambda_mult` | `float`, optional | Controls the trade-off between relevance and diversity, from 0 (maximum diversity) to 1 (maximum relevance). Defaults to 0.5. |
Exceptions

| Type | Description |
|---|---|
| `ValueError` | If `lambda_mult` is not in the range [0, 1]. |
Returns

| Type | Description |
|---|---|
| `List[Document]` | A list of documents selected by maximal marginal relevance. |
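The effect of `lambda_mult` can be illustrated with a minimal, pure-Python MMR loop over toy vectors. This is a sketch of the algorithm's relevance/diversity trade-off, not the library's implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mmr(query, candidates, k=2, lambda_mult=0.5):
    """Select k candidate indices balancing relevance and diversity."""
    if not 0 <= lambda_mult <= 1:
        raise ValueError("lambda_mult must be in [0, 1]")
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, candidates[i])
            # Redundancy: similarity to the closest already-selected result.
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Two near-duplicate relevant vectors plus one orthogonal vector.
docs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
query = [1.0, 0.0]
print(mmr(query, docs, k=2, lambda_mult=1.0))  # pure relevance: [0, 1]
print(mmr(query, docs, k=2, lambda_mult=0.3))  # favors diversity: [0, 2]
```

With `lambda_mult=1.0` the near-duplicate is kept; lowering it swaps the duplicate for the orthogonal document.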
similarity_search
```python
similarity_search(
    query: str, k: int = 4, **kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
```
Conducts a similarity search based on the specified query, returning a list of the top 'k' documents that are most similar to the query.
Parameters

| Name | Type | Description |
|---|---|---|
| `query` | `str` | The text query for which similar documents are retrieved. |
| `k` | `int`, optional | The number of documents to return, capping the size of the result set. Defaults to 4. |
| `**kwargs` | `Any` | Additional search parameters or options. |
Exceptions

| Type | Description |
|---|---|
| `ValueError` | If any provided search parameter is invalid, or if the search fails due to misconfiguration or an execution error in the search backend. |
Returns

| Type | Description |
|---|---|
| `List[Document]` | Up to `k` `Document` objects, ranked by similarity to the query; the most relevant results subject to any additional constraints specified. |
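A small usage sketch: the typical pattern is to pass the user's question and read `page_content` off the returned `Document` objects. `store` is assumed to be an initialized `RedisVectorStore`:

```python
def search(store, question: str):
    """Return the text of the top matches for a text query (sketch)."""
    # k caps the result set; further options pass through **kwargs.
    docs = store.similarity_search(question, k=4)
    return [d.page_content for d in docs]
```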
similarity_search_by_vector
```python
similarity_search_by_vector(
    embedding: typing.List[float], k: int = 4, **kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
```
Performs a similarity search for the given embedding and returns the top k most similar Document objects, discarding their similarity scores.
Parameters

| Name | Type | Description |
|---|---|---|
| `embedding` | `List[float]` | The query embedding for the similarity search. |
| `k` | `int`, optional | The number of top documents to return. Defaults to 4. |
| `**kwargs` | `Any` | Additional keyword arguments to pass to the search. |
Returns

| Type | Description |
|---|---|
| `List[Document]` | Up to `k` `Document` objects, ranked by similarity to the query embedding; the most relevant results subject to any additional constraints specified. |
similarity_search_with_score
```python
similarity_search_with_score(
    query: str, k: int = 4, **kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
```
Performs a similarity search using the given query, returning documents and their similarity scores.
Parameters

| Name | Type | Description |
|---|---|---|
| `query` | `str` | The query string to search for. |
| `k` | `int`, optional | The number of closest documents to return. Defaults to 4. |
| `**kwargs` | `Any` | Additional keyword arguments for future use. |
Returns

| Type | Description |
|---|---|
| `List[Tuple[Document, float]]` | A ranked list of tuples, each containing a `Document` and its similarity score; up to `k` entries representing the documents most relevant to the query. |
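The returned scores make client-side filtering straightforward. A sketch, assuming (as is common for this kind of store, but worth verifying for your configuration) that the score is a distance where lower means more similar; `store` and the cutoff value are placeholders:

```python
def search_with_cutoff(store, question: str, max_distance: float):
    """Keep only results whose distance stays under a cutoff (sketch)."""
    pairs = store.similarity_search_with_score(question, k=10)
    return [(doc, score) for doc, score in pairs if score <= max_distance]
```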