AlloyDBVectorStore(
key: object,
engine: AlloyDBEngine,
vs: AsyncAlloyDBVectorStore,
stores_text: bool = True,
is_embedding_query: bool = True,
)
Google AlloyDB Vector Store class
Properties
client
Get client.
Methods
AlloyDBVectorStore
AlloyDBVectorStore(
key: object,
engine: AlloyDBEngine,
vs: AsyncAlloyDBVectorStore,
stores_text: bool = True,
is_embedding_query: bool = True,
)
AlloyDBVectorStore constructor.
Parameters

| Name | Type | Description |
|---|---|---|
| key | object | Prevent direct constructor usage. |
| engine | AlloyDBEngine | Connection pool engine for managing connections to the Postgres database. |
| vs | AsyncAlloyDBVectorStore | The async-only vector store implementation. |
| stores_text | bool | Whether the table stores text. Defaults to True. |
| is_embedding_query | bool | Whether the table query can have embeddings. Defaults to True. |

Exceptions

| Type | Description |
|---|---|
| Exception | If called directly by user. |
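Because the constructor raises when called directly, instances are obtained through the create / create_sync factory methods documented below. A minimal sketch, assuming an engine built with AlloyDBEngine.from_instance (the connection arguments shown are placeholders):

```python
from llama_index_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore

# Placeholder connection details; adjust to your AlloyDB instance.
engine = AlloyDBEngine.from_instance(
    project_id="my-project",
    region="us-central1",
    cluster="my-cluster",
    instance="my-instance",
    database="my-db",
)

# AlloyDBVectorStore(...) raises if called directly; use a factory method instead.
# The table must already exist with the expected schema (see create_sync below).
store = AlloyDBVectorStore.create_sync(engine=engine, table_name="vector_table")
```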
aapply_vector_index
aapply_vector_index(
index: llama_index_alloydb_pg.indexes.BaseIndex,
name: typing.Optional[str] = None,
concurrently: bool = False,
) -> None
Create an index on the vector store table.
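A hedged sketch of building an index asynchronously. HNSWIndex and its m / ef_construction parameters are assumptions about llama_index_alloydb_pg.indexes, mirroring common pgvector HNSW options:

```python
from llama_index_alloydb_pg.indexes import HNSWIndex  # assumed index class

async def build_hnsw_index(store):
    index = HNSWIndex(m=16, ef_construction=64)  # parameter names are assumptions
    # concurrently=True asks Postgres to build the index without blocking writes.
    await store.aapply_vector_index(index, concurrently=True)
```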
aclear
aclear() -> None
Asynchronously delete all nodes from the table.
add
add(
nodes: typing.Sequence[llama_index.core.schema.BaseNode], **add_kwargs: typing.Any
) -> list[str]
Synchronously add nodes to the table.
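A short sketch of adding pre-embedded nodes with the synchronous add method; the texts, the 768-dimensional placeholder embeddings, and the store variable (an AlloyDBVectorStore obtained as shown above) are illustrative only:

```python
from llama_index.core.schema import TextNode

nodes = [
    TextNode(text="AlloyDB is a PostgreSQL-compatible database.", embedding=[0.1] * 768),
    TextNode(text="Vector search finds semantically similar rows.", embedding=[0.2] * 768),
]
inserted_ids = store.add(nodes)  # returns the ids of the nodes written to the table
```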
adelete
adelete(ref_doc_id: str, **delete_kwargs: typing.Any) -> None
Asynchronously delete nodes belonging to the provided parent document from the table.
adelete_nodes
adelete_nodes(
node_ids: typing.Optional[list[str]] = None,
filters: typing.Optional[
llama_index.core.vector_stores.types.MetadataFilters
] = None,
**delete_kwargs: typing.Any
) -> None
Asynchronously delete a set of nodes from the table matching the provided node IDs and filters.
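A sketch of deleting nodes by metadata filter using the standard llama_index.core filter types; the "source" key and its value are placeholders:

```python
from llama_index.core.vector_stores.types import MetadataFilter, MetadataFilters

async def remove_stale_nodes(store):
    filters = MetadataFilters(filters=[MetadataFilter(key="source", value="old_crawl")])
    # Deletes every node whose metadata matches the filter; node_ids may also be passed.
    await store.adelete_nodes(filters=filters)
```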
adrop_vector_index
adrop_vector_index(index_name: typing.Optional[str] = None) -> None
Drop the vector index.
aget_nodes
aget_nodes(
node_ids: typing.Optional[list[str]] = None,
filters: typing.Optional[
llama_index.core.vector_stores.types.MetadataFilters
] = None,
) -> list[llama_index.core.schema.BaseNode]
Asynchronously get nodes from the table matching the provided node IDs and filters.
ais_valid_index
ais_valid_index(index_name: typing.Optional[str] = None) -> bool
Check if index exists in the table.
apply_vector_index
apply_vector_index(
index: llama_index_alloydb_pg.indexes.BaseIndex,
name: typing.Optional[str] = None,
concurrently: bool = False,
) -> None
Create an index on the vector store table.
aquery
aquery(
query: llama_index.core.vector_stores.types.VectorStoreQuery, **kwargs: typing.Any
) -> llama_index.core.vector_stores.types.VectorStoreQueryResult
Asynchronously query vector store.
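A sketch of an asynchronous similarity query; in practice query_embedding comes from the same embedding model used at ingestion time:

```python
from llama_index.core.vector_stores.types import VectorStoreQuery

async def search(store, query_embedding: list[float]):
    query = VectorStoreQuery(query_embedding=query_embedding, similarity_top_k=5)
    result = await store.aquery(query)
    # result carries the matched nodes, their ids, and similarity scores.
    return list(zip(result.ids or [], result.similarities or []))
```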
areindex
areindex(index_name: typing.Optional[str] = None) -> None
Re-index the vector store table.
aset_maintenance_work_mem
aset_maintenance_work_mem(num_leaves: int, vector_size: int) -> None
Set database maintenance work memory (for ScaNN index creation).
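A sketch of the intended flow around ScaNN index creation: raise maintenance work memory first, then apply the index. ScaNNIndex and its num_leaves parameter are assumptions about llama_index_alloydb_pg.indexes:

```python
from llama_index_alloydb_pg.indexes import ScaNNIndex  # assumed index class

async def build_scann_index(store, vector_size: int = 768):
    num_leaves = 200  # placeholder tuning value
    # Give the database enough maintenance memory for the ScaNN build.
    await store.aset_maintenance_work_mem(num_leaves, vector_size)
    await store.aapply_vector_index(ScaNNIndex(num_leaves=num_leaves))
```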
async_add
async_add(
nodes: typing.Sequence[llama_index.core.schema.BaseNode], **kwargs: typing.Any
) -> list[str]
Asynchronously add nodes to the table.
class_name
class_name() -> str
Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
clear
clear() -> None
Synchronously delete all nodes from the table.
create
create(
engine: llama_index_alloydb_pg.engine.AlloyDBEngine,
table_name: str,
schema_name: str = "public",
id_column: str = "node_id",
text_column: str = "text",
embedding_column: str = "embedding",
metadata_json_column: str = "li_metadata",
metadata_columns: list[str] = [],
ref_doc_id_column: str = "ref_doc_id",
node_column: str = "node_data",
stores_text: bool = True,
is_embedding_query: bool = True,
distance_strategy: llama_index_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
index_query_options: typing.Optional[
llama_index_alloydb_pg.indexes.QueryOptions
] = None,
) -> llama_index_alloydb_pg.vector_store.AlloyDBVectorStore
Creates an AlloyDBVectorStore instance and validates the table schema.
Parameters

| Name | Type | Description |
|---|---|---|
| engine | AlloyDBEngine | AlloyDB engine for managing connections to the AlloyDB database. |
| table_name | str | Name of the existing table or the table to be created. |
| schema_name | str | Name of the database schema. Defaults to "public". |
| id_column | str | Column that represents the id of a Node. Defaults to "node_id". |
| text_column | str | Column that represents the text content of a Node. Defaults to "text". |
| embedding_column | str | Column for embedding vectors. The embedding is generated from the content of the Node. Defaults to "embedding". |
| metadata_json_column | str | Column to store metadata as JSON. Defaults to "li_metadata". |
| metadata_columns | list[str] | Column(s) that represent extracted metadata keys in their own columns. |
| ref_doc_id_column | str | Column that represents the id of a Node's parent document. Defaults to "ref_doc_id". |
| node_column | str | Column that represents the whole JSON node. Defaults to "node_data". |
| stores_text | bool | Whether the table stores text. Defaults to True. |
| is_embedding_query | bool | Whether the table query can have embeddings. Defaults to True. |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| index_query_options | QueryOptions | Index query options. |

Exceptions

| Type | Description |
|---|---|
| Exception | If the table does not exist or does not follow the provided structure. |
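A sketch of the full asynchronous setup: create an engine, initialize a table with the expected schema, then build the store with create(). AlloyDBEngine.afrom_instance and ainit_vector_store_table are assumed engine helpers; connection details, table name, and vector size are placeholders:

```python
from llama_index_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore

async def make_store() -> AlloyDBVectorStore:
    engine = await AlloyDBEngine.afrom_instance(
        project_id="my-project",
        region="us-central1",
        cluster="my-cluster",
        instance="my-instance",
        database="my-db",
    )
    # Creates a table matching the column layout expected by create().
    await engine.ainit_vector_store_table(table_name="vector_table", vector_size=768)
    return await AlloyDBVectorStore.create(engine=engine, table_name="vector_table")
```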
create_sync
create_sync(
engine: llama_index_alloydb_pg.engine.AlloyDBEngine,
table_name: str,
schema_name: str = "public",
id_column: str = "node_id",
text_column: str = "text",
embedding_column: str = "embedding",
metadata_json_column: str = "li_metadata",
metadata_columns: list[str] = [],
ref_doc_id_column: str = "ref_doc_id",
node_column: str = "node_data",
stores_text: bool = True,
is_embedding_query: bool = True,
distance_strategy: llama_index_alloydb_pg.indexes.DistanceStrategy = DistanceStrategy.COSINE_DISTANCE,
index_query_options: typing.Optional[
llama_index_alloydb_pg.indexes.QueryOptions
] = None,
) -> llama_index_alloydb_pg.vector_store.AlloyDBVectorStore
Creates an AlloyDBVectorStore instance and validates the table schema.
Parameters

| Name | Type | Description |
|---|---|---|
| engine | AlloyDBEngine | AlloyDB engine for managing connections to the AlloyDB database. |
| table_name | str | Name of the existing table or the table to be created. |
| schema_name | str | Name of the database schema. Defaults to "public". |
| id_column | str | Column that represents the id of a Node. Defaults to "node_id". |
| text_column | str | Column that represents the text content of a Node. Defaults to "text". |
| embedding_column | str | Column for embedding vectors. The embedding is generated from the content of the Node. Defaults to "embedding". |
| metadata_json_column | str | Column to store metadata as JSON. Defaults to "li_metadata". |
| metadata_columns | list[str] | Column(s) that represent extracted metadata keys in their own columns. |
| ref_doc_id_column | str | Column that represents the id of a Node's parent document. Defaults to "ref_doc_id". |
| node_column | str | Column that represents the whole JSON node. Defaults to "node_data". |
| stores_text | bool | Whether the table stores text. Defaults to True. |
| is_embedding_query | bool | Whether the table query can have embeddings. Defaults to True. |
| distance_strategy | DistanceStrategy | Distance strategy to use for vector similarity search. Defaults to COSINE_DISTANCE. |
| index_query_options | QueryOptions | Index query options. |

Exceptions

| Type | Description |
|---|---|
| Exception | If the table does not exist or does not follow the provided structure. |
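A sketch of the synchronous variant with a few non-default arguments made explicit; the metadata column names are placeholders and engine is an AlloyDBEngine created as shown above:

```python
from llama_index_alloydb_pg import AlloyDBVectorStore
from llama_index_alloydb_pg.indexes import DistanceStrategy

store = AlloyDBVectorStore.create_sync(
    engine=engine,
    table_name="vector_table",
    distance_strategy=DistanceStrategy.COSINE_DISTANCE,  # documented default, shown explicitly
    metadata_columns=["source", "author"],  # placeholder extracted-metadata columns
)
```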
delete
delete(ref_doc_id: str, **delete_kwargs: typing.Any) -> None
Synchronously delete nodes belonging to the provided parent document from the table.
delete_nodes
delete_nodes(
node_ids: typing.Optional[list[str]] = None,
filters: typing.Optional[
llama_index.core.vector_stores.types.MetadataFilters
] = None,
**delete_kwargs: typing.Any
) -> None
Synchronously delete a set of nodes from the table matching the provided node IDs and filters.
drop_vector_index
drop_vector_index(index_name: typing.Optional[str] = None) -> None
Drop the vector index.
get_nodes
get_nodes(
node_ids: typing.Optional[list[str]] = None,
filters: typing.Optional[
llama_index.core.vector_stores.types.MetadataFilters
] = None,
) -> list[llama_index.core.schema.BaseNode]
Synchronously get nodes from the table matching the provided node IDs and filters.
is_valid_index
is_valid_index(index_name: typing.Optional[str] = None) -> bool
Check if index exists in the table.
model_post_init
model_post_init(context: Any, /) -> None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that's what pydantic-core passes when calling it.
query
query(
query: llama_index.core.vector_stores.types.VectorStoreQuery, **kwargs: typing.Any
) -> llama_index.core.vector_stores.types.VectorStoreQueryResult
Synchronously query vector store.
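A sketch of the synchronous counterpart to aquery, using a placeholder query embedding:

```python
from llama_index.core.vector_stores.types import VectorStoreQuery

result = store.query(VectorStoreQuery(query_embedding=[0.0] * 768, similarity_top_k=3))
for node, score in zip(result.nodes or [], result.similarities or []):
    print(node.node_id, score)
```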
reindex
reindex(index_name: typing.Optional[str] = None) -> None
Re-index the vector store table.
set_maintenance_work_mem
set_maintenance_work_mem(num_leaves: int, vector_size: int) -> None
Set database maintenance work memory (for ScaNN index creation).
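Once populated and indexed, the store plugs into a standard LlamaIndex retrieval pipeline. A sketch using VectorStoreIndex.from_vector_store; MockEmbedding stands in for a real embedding model and is used here only for illustration:

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.embeddings import MockEmbedding

# Use a real embedding model in production; its dimension must match the
# table's embedding column.
embed_model = MockEmbedding(embed_dim=768)

index = VectorStoreIndex.from_vector_store(vector_store=store, embed_model=embed_model)
retriever = index.as_retriever(similarity_top_k=5)
nodes_with_scores = retriever.retrieve("What is AlloyDB?")
```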