AlloyDBVectorStore(
key: object,
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
vs: langchain_google_alloydb_pg.async_vectorstore.AsyncAlloyDBVectorStore,
)
Google AlloyDB Vector Store class
Properties
embeddings
Access the query embedding object if available.
Methods
AlloyDBVectorStore
AlloyDBVectorStore(
key: object,
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
vs: langchain_google_alloydb_pg.async_vectorstore.AsyncAlloyDBVectorStore,
)
AlloyDBVectorStore constructor.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| key | object | Prevent direct constructor usage. |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the Postgres database. |
| vs | AsyncAlloyDBVectorStore | The async-only VectorStore implementation. |

Exceptions

| Type | Description |
| --- | --- |
| Exception | If called directly by user. |
_select_relevance_score_fn
_select_relevance_score_fn() -> typing.Callable[[float], float]
Select a relevance function based on distance strategy.
aadd_documents
aadd_documents(
documents: typing.List[langchain_core.documents.base.Document],
ids: typing.Optional[typing.List] = None,
**kwargs: typing.Any
) -> typing.List[str]
Embed documents and add to the table.
Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
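Example, a minimal sketch of adding documents on the async interface; `store` is assumed to be an AlloyDBVectorStore obtained from the create() factory documented later on this page:

```python
from langchain_core.documents import Document


async def add_docs(store) -> list[str]:
    docs = [
        Document(
            page_content="AlloyDB is a PostgreSQL-compatible database.",
            metadata={"source": "notes"},
        ),
        Document(page_content="pgvector stores embedding vectors in Postgres."),
    ]
    # aadd_documents embeds each page_content with the store's embedding
    # service and inserts one row per document, returning the assigned ids.
    ids = await store.aadd_documents(docs)
    return ids
```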
aadd_embeddings
aadd_embeddings(
texts: typing.Iterable[str],
embeddings: typing.List[typing.List[float]],
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List[str]] = None,
**kwargs: typing.Any
) -> typing.List[str]
Add data along with embeddings to the table.
aadd_images
aadd_images(
uris: typing.List[str],
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List[str]] = None,
**kwargs: typing.Any
) -> typing.List[str]
Embed images and add to the table.
aadd_texts
aadd_texts(
texts: typing.Iterable[str],
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List] = None,
**kwargs: typing.Any
) -> typing.List[str]
Embed texts and add to the table.
Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
aapply_vector_index
aapply_vector_index(
index: langchain_google_alloydb_pg.indexes.BaseIndex,
name: typing.Optional[str] = None,
concurrently: bool = False,
) -> None
Create an index on the vector store table.
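A sketch of building an approximate-nearest-neighbor index over the embedding column. HNSWIndex is assumed to be one of the BaseIndex implementations exported by langchain_google_alloydb_pg.indexes; substitute whichever index type your deployment uses:

```python
from langchain_google_alloydb_pg.indexes import HNSWIndex  # assumed export


async def build_index(store) -> None:
    # concurrently=True issues CREATE INDEX CONCURRENTLY so writes are not blocked.
    await store.aapply_vector_index(HNSWIndex(), concurrently=True)
    # ais_valid_index() with no name checks the library's default index name.
    assert await store.ais_valid_index()
```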
add_documents
add_documents(
documents: typing.List[langchain_core.documents.base.Document],
ids: typing.Optional[typing.List] = None,
**kwargs: typing.Any
) -> typing.List[str]
Embed documents and add to the table.
Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
add_embeddings
add_embeddings(
texts: typing.Iterable[str],
embeddings: typing.List[typing.List[float]],
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List[str]] = None,
**kwargs: typing.Any
) -> typing.List[str]
Add data along with embeddings to the table.
add_images
add_images(
uris: typing.List[str],
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List[str]] = None,
**kwargs: typing.Any
) -> typing.List[str]
Embed images and add to the table.
add_texts
add_texts(
texts: typing.Iterable[str],
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List] = None,
**kwargs: typing.Any
) -> typing.List[str]
Embed texts and add to the table.
Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
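A minimal synchronous sketch, assuming `store` was built with create_sync() (documented below):

```python
texts = [
    "AlloyDB supports the pgvector extension.",
    "Each row stores one text and its embedding.",
]
metadatas = [{"topic": "alloydb"}, {"topic": "schema"}]

# add_texts embeds the texts and inserts one row per text;
# ids are generated when not supplied.
ids = store.add_texts(texts, metadatas=metadatas)
print(f"inserted {len(ids)} rows")
```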
adelete
adelete(
ids: typing.Optional[typing.List] = None, **kwargs: typing.Any
) -> typing.Optional[bool]
Delete records from the table.
Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
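A short sketch of removing rows by id on the async interface, where `ids` is a list previously returned by aadd_texts() or aadd_documents():

```python
async def remove_rows(store, ids: list) -> None:
    # adelete removes the rows whose id column value is in `ids`.
    deleted = await store.adelete(ids)
    print("deleted:", deleted)
```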
adrop_vector_index
adrop_vector_index(index_name: typing.Optional[str] = None) -> None
Drop the vector index.
afrom_documents
afrom_documents(
documents: typing.List[langchain_core.documents.base.Document],
embedding: langchain_core.embeddings.embeddings.Embeddings,
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
table_name: str,
schema_name: str = "public",
ids: typing.Optional[typing.List] = None,
content_column: str = "content",
embedding_column: str = "embedding",
metadata_columns: typing.List[str] = [],
ignore_metadata_columns: typing.Optional[typing.List[str]] = None,
id_column: str = "langchain_id",
metadata_json_column: str = "langchain_metadata",
distance_strategy: langchain_google_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
index_query_options: typing.Optional[
langchain_google_alloydb_pg.indexes.QueryOptions
] = None,
**kwargs: typing.Any
) -> langchain_google_alloydb_pg.vectorstore.AlloyDBVectorStore
Create an AlloyDBVectorStore instance from documents.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| documents | List[Document] | Documents to add to the vector store. |
| embedding | Embeddings | Text embedding model to use. |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the AlloyDB database. |
| table_name | str | Name of an existing table. |
| schema_name | str, optional | Name of the database schema. Defaults to "public". |
| content_column | str, optional | Column that represents a Document's page_content. Defaults to "content". |
| embedding_column | str, optional | Column for embedding vectors. The embedding is generated from the document value. Defaults to "embedding". |
| metadata_columns | List[str], optional | Column(s) that represent a document's metadata. Defaults to an empty list. |
| ignore_metadata_columns | Optional[List[str]], optional | Column(s) to ignore in pre-existing tables for a document's metadata. Cannot be used with metadata_columns. Defaults to None. |
| id_column | str, optional | Column that represents the Document's id. Defaults to "langchain_id". |
| metadata_json_column | str, optional | Column to store metadata as JSON. Defaults to "langchain_metadata". |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| k | int | Number of Documents to return from search. Defaults to 4. |
| fetch_k | int | Number of Documents to fetch to pass to the MMR algorithm. Defaults to 20. |
| lambda_mult | float | Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. |
| index_query_options | QueryOptions | Index query option. |

Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
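An end-to-end async sketch. AlloyDBEngine.afrom_instance(), engine.ainit_vectorstore_table(), and the Vertex AI embedding model are assumptions about the surrounding setup (project, region, table name, and vector size below are placeholders); the afrom_documents() arguments follow the signature above:

```python
import asyncio

from langchain_core.documents import Document
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore
from langchain_google_vertexai import VertexAIEmbeddings  # assumed embedding provider


async def main() -> AlloyDBVectorStore:
    engine = await AlloyDBEngine.afrom_instance(
        project_id="my-project",
        region="us-central1",
        cluster="my-cluster",
        instance="my-instance",
        database="my-db",
    )
    # The target table must already exist; vector_size must match the embedding model.
    await engine.ainit_vectorstore_table(table_name="docs", vector_size=768)
    return await AlloyDBVectorStore.afrom_documents(
        documents=[Document(page_content="hello vector store")],
        embedding=VertexAIEmbeddings(model_name="textembedding-gecko@003"),
        engine=engine,
        table_name="docs",
    )


store = asyncio.run(main())
```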
afrom_texts
afrom_texts(
texts: typing.List[str],
embedding: langchain_core.embeddings.embeddings.Embeddings,
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
table_name: str,
schema_name: str = "public",
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List] = None,
content_column: str = "content",
embedding_column: str = "embedding",
metadata_columns: typing.List[str] = [],
ignore_metadata_columns: typing.Optional[typing.List[str]] = None,
id_column: str = "langchain_id",
metadata_json_column: str = "langchain_metadata",
distance_strategy: langchain_google_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
index_query_options: typing.Optional[
langchain_google_alloydb_pg.indexes.QueryOptions
] = None,
**kwargs: typing.Any
) -> langchain_google_alloydb_pg.vectorstore.AlloyDBVectorStore
Create an AlloyDBVectorStore instance from texts.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| texts | List[str] | Texts to add to the vector store. |
| embedding | Embeddings | Text embedding model to use. |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the AlloyDB database. |
| table_name | str | Name of an existing table. |
| schema_name | str, optional | Name of the database schema. Defaults to "public". |
| metadatas | Optional[List[dict]], optional | List of metadatas to add to table records. Defaults to None. |
| content_column | str, optional | Column that represents a Document's page_content. Defaults to "content". |
| embedding_column | str, optional | Column for embedding vectors. The embedding is generated from the document value. Defaults to "embedding". |
| metadata_columns | List[str], optional | Column(s) that represent a document's metadata. Defaults to an empty list. |
| ignore_metadata_columns | Optional[List[str]], optional | Column(s) to ignore in pre-existing tables for a document's metadata. Cannot be used with metadata_columns. Defaults to None. |
| id_column | str, optional | Column that represents the Document's id. Defaults to "langchain_id". |
| metadata_json_column | str, optional | Column to store metadata as JSON. Defaults to "langchain_metadata". |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| k | int | Number of Documents to return from search. Defaults to 4. |
| fetch_k | int | Number of Documents to fetch to pass to the MMR algorithm. Defaults to 20. |
| lambda_mult | float | Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. |
| index_query_options | QueryOptions | Index query option. |

Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
ais_valid_index
ais_valid_index(index_name: typing.Optional[str] = None) -> bool
Check if index exists in the table.
amax_marginal_relevance_search
amax_marginal_relevance_search(
query: str,
k: typing.Optional[int] = None,
fetch_k: typing.Optional[int] = None,
lambda_mult: typing.Optional[float] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected using the maximal marginal relevance.
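For example, assuming `store` already contains embedded rows:

```python
async def mmr_search(store):
    # fetch_k candidates are retrieved by similarity, then re-ranked by MMR
    # down to k results; lambda_mult trades relevance (1) against diversity (0).
    return await store.amax_marginal_relevance_search(
        "postgres vector search",
        k=4,
        fetch_k=20,
        lambda_mult=0.5,
    )
```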
amax_marginal_relevance_search_by_vector
amax_marginal_relevance_search_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
fetch_k: typing.Optional[int] = None,
lambda_mult: typing.Optional[float] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_with_score_by_vector
amax_marginal_relevance_search_with_score_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
fetch_k: typing.Optional[int] = None,
lambda_mult: typing.Optional[float] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
Return docs and distance scores selected using the maximal marginal relevance.
apply_vector_index
apply_vector_index(
index: langchain_google_alloydb_pg.indexes.BaseIndex,
name: typing.Optional[str] = None,
concurrently: bool = False,
) -> None
Create an index on the vector store table.
areindex
areindex(index_name: typing.Optional[str] = None) -> None
Re-index the vector store table.
aset_maintenance_work_mem
aset_maintenance_work_mem(num_leaves: int, vector_size: int) -> None
Set database maintenance work memory (for ScaNN index creation).
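A sketch of raising maintenance work memory before building a ScaNN index. ScaNNIndex is assumed to be exported by langchain_google_alloydb_pg.indexes, and the square-root heuristic for num_leaves is a common starting point rather than a library rule:

```python
from langchain_google_alloydb_pg.indexes import ScaNNIndex  # assumed export


async def build_scann_index(store, row_count: int, vector_size: int) -> None:
    num_leaves = max(1, int(row_count ** 0.5))
    # Sizes maintenance_work_mem so the ScaNN index build fits in memory.
    await store.aset_maintenance_work_mem(num_leaves, vector_size)
    await store.aapply_vector_index(ScaNNIndex())
```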
asimilarity_search
asimilarity_search(
query: str,
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected by similarity search on query.
asimilarity_search_by_vector
asimilarity_search_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected by vector similarity search.
asimilarity_search_image
asimilarity_search_image(
image_uri: str,
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected by similarity search on image.
asimilarity_search_with_score
asimilarity_search_with_score(
query: str,
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
Return docs and distance scores selected by similarity search on query.
asimilarity_search_with_score_by_vector
asimilarity_search_with_score_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
Return docs and distance scores selected by vector similarity search.
create
create(
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
embedding_service: langchain_core.embeddings.embeddings.Embeddings,
table_name: str,
schema_name: str = "public",
content_column: str = "content",
embedding_column: str = "embedding",
metadata_columns: typing.List[str] = [],
ignore_metadata_columns: typing.Optional[typing.List[str]] = None,
id_column: str = "langchain_id",
metadata_json_column: typing.Optional[str] = "langchain_metadata",
distance_strategy: langchain_google_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
index_query_options: typing.Optional[
langchain_google_alloydb_pg.indexes.QueryOptions
] = None,
) -> langchain_google_alloydb_pg.vectorstore.AlloyDBVectorStore
Create an AlloyDBVectorStore instance.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the AlloyDB database. |
| embedding_service | Embeddings | Text embedding model to use. |
| table_name | str | Name of an existing table. |
| schema_name | str, optional | Name of the database schema. Defaults to "public". |
| content_column | str | Column that represents a Document's page_content. Defaults to "content". |
| embedding_column | str | Column for embedding vectors. The embedding is generated from the document value. Defaults to "embedding". |
| metadata_columns | List[str] | Column(s) that represent a document's metadata. |
| ignore_metadata_columns | List[str] | Column(s) to ignore in pre-existing tables for a document's metadata. Cannot be used with metadata_columns. Defaults to None. |
| id_column | str | Column that represents the Document's id. Defaults to "langchain_id". |
| metadata_json_column | str | Column to store metadata as JSON. Defaults to "langchain_metadata". |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| k | int | Number of Documents to return from search. Defaults to 4. |
| fetch_k | int | Number of Documents to fetch to pass to the MMR algorithm. Defaults to 20. |
| lambda_mult | float | Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. |
| index_query_options | QueryOptions | Index query option. |
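A sketch of the async factory. Engine construction and the embedding model are assumptions (see the afrom_documents example above); the table and column names are placeholders, and the create() arguments follow the signature documented here:

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore
from langchain_google_vertexai import VertexAIEmbeddings  # assumed embedding provider


async def make_store() -> AlloyDBVectorStore:
    engine = await AlloyDBEngine.afrom_instance(
        project_id="my-project",
        region="us-central1",
        cluster="my-cluster",
        instance="my-instance",
        database="my-db",
    )
    return await AlloyDBVectorStore.create(
        engine=engine,
        embedding_service=VertexAIEmbeddings(model_name="textembedding-gecko@003"),
        table_name="docs",            # must already exist
        metadata_columns=["source"],  # promote selected metadata keys to columns
    )
```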
create_sync
create_sync(
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
embedding_service: langchain_core.embeddings.embeddings.Embeddings,
table_name: str,
schema_name: str = "public",
content_column: str = "content",
embedding_column: str = "embedding",
metadata_columns: typing.List[str] = [],
ignore_metadata_columns: typing.Optional[typing.List[str]] = None,
id_column: str = "langchain_id",
metadata_json_column: str = "langchain_metadata",
distance_strategy: langchain_google_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
index_query_options: typing.Optional[
langchain_google_alloydb_pg.indexes.QueryOptions
] = None,
) -> langchain_google_alloydb_pg.vectorstore.AlloyDBVectorStore
Create an AlloyDBVectorStore instance.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the AlloyDB database. |
| embedding_service | Embeddings | Text embedding model to use. |
| table_name | str | Name of an existing table. |
| schema_name | str, optional | Name of the database schema. Defaults to "public". |
| content_column | str, optional | Column that represents a Document's page_content. Defaults to "content". |
| embedding_column | str, optional | Column for embedding vectors. The embedding is generated from the document value. Defaults to "embedding". |
| metadata_columns | List[str] | Column(s) that represent a document's metadata. Defaults to an empty list. |
| ignore_metadata_columns | Optional[List[str]] | Column(s) to ignore in pre-existing tables for a document's metadata. Cannot be used with metadata_columns. Defaults to None. |
| id_column | str, optional | Column that represents the Document's id. Defaults to "langchain_id". |
| metadata_json_column | str, optional | Column to store metadata as JSON. Defaults to "langchain_metadata". |
| distance_strategy | DistanceStrategy, optional | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| k | int, optional | Number of Documents to return from search. Defaults to 4. |
| fetch_k | int, optional | Number of Documents to fetch to pass to the MMR algorithm. Defaults to 20. |
| lambda_mult | float, optional | Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. |
| index_query_options | Optional[QueryOptions], optional | Index query option. Defaults to None. |
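The synchronous counterpart, as a sketch. AlloyDBEngine.from_instance() is assumed to be the blocking engine constructor, and the connection details below are placeholders:

```python
from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore
from langchain_google_vertexai import VertexAIEmbeddings  # assumed embedding provider

engine = AlloyDBEngine.from_instance(
    project_id="my-project",
    region="us-central1",
    cluster="my-cluster",
    instance="my-instance",
    database="my-db",
)
store = AlloyDBVectorStore.create_sync(
    engine=engine,
    embedding_service=VertexAIEmbeddings(model_name="textembedding-gecko@003"),
    table_name="docs",
)
```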
delete
delete(
ids: typing.Optional[typing.List] = None, **kwargs: typing.Any
) -> typing.Optional[bool]
Delete records from the table.
Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
drop_vector_index
drop_vector_index(index_name: typing.Optional[str] = None) -> None
Drop the vector index.
from_documents
from_documents(
documents: typing.List[langchain_core.documents.base.Document],
embedding: langchain_core.embeddings.embeddings.Embeddings,
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
table_name: str,
schema_name: str = "public",
ids: typing.Optional[typing.List] = None,
content_column: str = "content",
embedding_column: str = "embedding",
metadata_columns: typing.List[str] = [],
ignore_metadata_columns: typing.Optional[typing.List[str]] = None,
id_column: str = "langchain_id",
metadata_json_column: str = "langchain_metadata",
distance_strategy: langchain_google_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
index_query_options: typing.Optional[
langchain_google_alloydb_pg.indexes.QueryOptions
] = None,
**kwargs: typing.Any
) -> langchain_google_alloydb_pg.vectorstore.AlloyDBVectorStore
Create an AlloyDBVectorStore instance from documents.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| documents | List[Document] | Documents to add to the vector store. |
| embedding | Embeddings | Text embedding model to use. |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the AlloyDB database. |
| table_name | str | Name of an existing table. |
| schema_name | str, optional | Name of the database schema. Defaults to "public". |
| content_column | str, optional | Column that represents a Document's page_content. Defaults to "content". |
| embedding_column | str, optional | Column for embedding vectors. The embedding is generated from the document value. Defaults to "embedding". |
| metadata_columns | List[str], optional | Column(s) that represent a document's metadata. Defaults to an empty list. |
| ignore_metadata_columns | Optional[List[str]], optional | Column(s) to ignore in pre-existing tables for a document's metadata. Cannot be used with metadata_columns. Defaults to None. |
| id_column | str, optional | Column that represents the Document's id. Defaults to "langchain_id". |
| metadata_json_column | str, optional | Column to store metadata as JSON. Defaults to "langchain_metadata". |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| k | int | Number of Documents to return from search. Defaults to 4. |
| fetch_k | int | Number of Documents to fetch to pass to the MMR algorithm. Defaults to 20. |
| lambda_mult | float | Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. |
| index_query_options | QueryOptions | Index query option. |

Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
from_texts
from_texts(
texts: typing.List[str],
embedding: langchain_core.embeddings.embeddings.Embeddings,
engine: langchain_google_alloydb_pg.engine.AlloyDBEngine,
table_name: str,
schema_name: str = "public",
metadatas: typing.Optional[typing.List[dict]] = None,
ids: typing.Optional[typing.List] = None,
content_column: str = "content",
embedding_column: str = "embedding",
metadata_columns: typing.List[str] = [],
ignore_metadata_columns: typing.Optional[typing.List[str]] = None,
id_column: str = "langchain_id",
metadata_json_column: str = "langchain_metadata",
distance_strategy: langchain_google_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
index_query_options: typing.Optional[
langchain_google_alloydb_pg.indexes.QueryOptions
] = None,
**kwargs: typing.Any
) -> langchain_google_alloydb_pg.vectorstore.AlloyDBVectorStore
Create an AlloyDBVectorStore instance from texts.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| texts | List[str] | Texts to add to the vector store. |
| embedding | Embeddings | Text embedding model to use. |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the AlloyDB database. |
| table_name | str | Name of an existing table. |
| schema_name | str, optional | Name of the database schema. Defaults to "public". |
| metadatas | Optional[List[dict]], optional | List of metadatas to add to table records. Defaults to None. |
| content_column | str, optional | Column that represents a Document's page_content. Defaults to "content". |
| embedding_column | str, optional | Column for embedding vectors. The embedding is generated from the document value. Defaults to "embedding". |
| metadata_columns | List[str], optional | Column(s) that represent a document's metadata. Defaults to an empty list. |
| ignore_metadata_columns | Optional[List[str]], optional | Column(s) to ignore in pre-existing tables for a document's metadata. Cannot be used with metadata_columns. Defaults to None. |
| id_column | str, optional | Column that represents the Document's id. Defaults to "langchain_id". |
| metadata_json_column | str, optional | Column to store metadata as JSON. Defaults to "langchain_metadata". |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| k | int | Number of Documents to return from search. Defaults to 4. |
| fetch_k | int | Number of Documents to fetch to pass to the MMR algorithm. Defaults to 20. |
| lambda_mult | float | Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. |
| index_query_options | QueryOptions | Index query option. |

Exceptions

| Type | Description |
| --- | --- |
| InvalidTextRepresentationError | |
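A sketch reusing the `engine` and embedding model from the create_sync() example above; the table name and metadata values are placeholders:

```python
store = AlloyDBVectorStore.from_texts(
    texts=["AlloyDB stores embeddings with pgvector."],
    embedding=VertexAIEmbeddings(model_name="textembedding-gecko@003"),
    engine=engine,
    table_name="docs",
    metadatas=[{"source": "notes"}],
)
```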
is_valid_index
is_valid_index(index_name: typing.Optional[str] = None) -> bool
Check if index exists in the table.
max_marginal_relevance_search
max_marginal_relevance_search(
query: str,
k: typing.Optional[int] = None,
fetch_k: typing.Optional[int] = None,
lambda_mult: typing.Optional[float] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector
max_marginal_relevance_search_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
fetch_k: typing.Optional[int] = None,
lambda_mult: typing.Optional[float] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_with_score_by_vector
max_marginal_relevance_search_with_score_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
fetch_k: typing.Optional[int] = None,
lambda_mult: typing.Optional[float] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
Return docs and distance scores selected using the maximal marginal relevance.
reindex
reindex(index_name: typing.Optional[str] = None) -> None
Re-index the vector store table.
set_maintenance_work_mem
set_maintenance_work_mem(num_leaves: int, vector_size: int) -> None
Set database maintenance work memory (for ScaNN index creation).
similarity_search
similarity_search(
query: str,
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected by similarity search on query.
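For example, a filtered search on the sync interface; the filter string is applied as a SQL condition, so the "source" column referenced here is an assumption about your table schema:

```python
docs = store.similarity_search(
    "how are embeddings stored?",
    k=4,
    filter="source = 'notes'",
)
for doc in docs:
    print(doc.metadata, doc.page_content)
```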
similarity_search_by_vector
similarity_search_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected by vector similarity search.
similarity_search_image
similarity_search_image(
image_uri: str,
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[langchain_core.documents.base.Document]
Return docs selected by similarity search on image.
similarity_search_with_score
similarity_search_with_score(
query: str,
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
Return docs and distance scores selected by similarity search on query.
similarity_search_with_score_by_vector
similarity_search_with_score_by_vector(
embedding: typing.List[float],
k: typing.Optional[int] = None,
filter: typing.Optional[str] = None,
**kwargs: typing.Any
) -> typing.List[typing.Tuple[langchain_core.documents.base.Document, float]]
Return docs and distance scores selected by similarity search on vector.
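As a final sketch, note that the scores returned by the *_with_score methods are distance scores under the store's DistanceStrategy (for cosine distance, lower means closer), not normalized relevance scores:

```python
results = store.similarity_search_with_score("pgvector", k=3)
for doc, distance in results:
    print(round(distance, 4), doc.page_content)
```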