This page describes some use cases for building LLM-powered applications using LlamaIndex integrated with AlloyDB Omni. Links to notebooks on GitHub are provided to help you explore approaches or to help you develop your application.
LlamaIndex is a generative AI orchestration framework that lets you connect and integrate data sources with large language models (LLMs). You can use LlamaIndex to build applications that access and query information from private or domain-specific data using natural language queries.
LlamaIndex acts as a bridge between custom data and LLMs, facilitating the development of knowledge assistant applications with retrieval-augmented generation (RAG) capabilities.
LlamaIndex is well suited for document-centric applications because it emphasizes structured document management, which simplifies indexing and retrieval. This framework features optimized query mechanisms that enhance the speed and relevance of information access, along with robust metadata handling for nuanced filtering.
For more information about the LlamaIndex framework, see the LlamaIndex product documentation.
LlamaIndex components
AlloyDB Omni offers the following LlamaIndex interfaces:
- Vector Store
- Document Store
- Index Stores
- Chat Stores
- Document Reader
Learn how to use LlamaIndex with the Quickstart for AlloyDB Omni.
Vector Store
This LlamaIndex integration lets you use the robust and scalable nature of AlloyDB Omni to store and manage your LlamaIndex data. By combining LlamaIndex's indexing and querying capabilities with AlloyDB Omni's high performance and reliability, you can build more efficient and scalable LLM-powered applications.
LlamaIndex breaks down a document, such as a DOC, TXT, or PDF file, into components called nodes. A node contains text content, vector embeddings, and metadata; the vector store holds the embedding vectors and text of ingested nodes. You can apply filters on the metadata fields to restrict node retrieval to nodes that match the specified metadata criteria.
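For example, the following sketch applies a metadata filter using LlamaIndex's core filter classes. The source field and its value are hypothetical, and index is assumed to be a VectorStoreIndex built over an AlloyDBVectorStore, as shown in the sketch later on this page.

```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# Only retrieve nodes whose "source" metadata field equals "hr-handbook";
# the field name and value here are hypothetical.
filters = MetadataFilters(filters=[ExactMatchFilter(key="source", value="hr-handbook")])

# `index` is a VectorStoreIndex built over an AlloyDBVectorStore
# (see the vector store sketch later on this page).
retriever = index.as_retriever(filters=filters)
nodes = retriever.retrieve("What is the vacation policy?")
```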
To work with vector stores in AlloyDB Omni, use the AlloyDBVectorStore class. For more information, see LlamaIndex Vector Stores.
Store vector embeddings with the AlloyDBVectorStore class
The AlloyDB Omni notebook for Vector Store shows you how to do the following:
- Create an AlloyDBEngine to connect to your AlloyDB Omni instance using AlloyDBEngine.from_connection_string().
- Initialize a table to store vector embeddings.
- Create an embedding class instance using any LlamaIndex embeddings model.
- Initialize a default AlloyDBVectorStore vector store.
- Create and query an index from the vector store by using VectorStoreIndex.
- Create a custom vector store to store and filter metadata.
- Add an ANN index to improve search latency.
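The following minimal sketch shows that flow. It assumes the llama-index-alloydb-pg package and the factory pattern that package uses; the connection string, table name, embedding model, and the init_vector_store_table() and create_sync() helper names are assumptions, so defer to the notebook for exact signatures.

```python
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.core.schema import TextNode
# Any LlamaIndex embeddings model works; this one needs llama-index-embeddings-huggingface.
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore

# Connect to the AlloyDB Omni instance; the connection string is a placeholder.
engine = AlloyDBEngine.from_connection_string(
    "postgresql+asyncpg://user:password@127.0.0.1:5432/mydb"
)

# Assumed helper: create a table sized to the embedding model's output dimension.
engine.init_vector_store_table(table_name="vector_store", vector_size=384)

vector_store = AlloyDBVectorStore.create_sync(engine, table_name="vector_store")

# Build an index backed by the AlloyDB Omni table, then run a similarity search.
index = VectorStoreIndex(
    [TextNode(text="AlloyDB Omni runs in any environment that supports PostgreSQL.")],
    storage_context=StorageContext.from_defaults(vector_store=vector_store),
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),  # 384-dim
)
for result in index.as_retriever(similarity_top_k=1).retrieve("Where does AlloyDB Omni run?"):
    print(result.node.get_content(), result.score)
```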
Document and Index Stores
The LlamaIndex Document Stores integration manages structured document storage and retrieval, and is optimized for LlamaIndex's document-centric capabilities. A document store holds the content that's related to the vectors in the vector store.
For more information, see the LlamaIndex Document Stores product documentation.
Index Stores facilitate the management of indexes to enable rapid querying and data retrieval, for example, summary, keyword, and tree indexes. An index in LlamaIndex is lightweight storage for node metadata only. Updates to node metadata don't require re-indexing, that is, regenerating embeddings, for the full node or for all nodes in a document.
For more information, see LlamaIndex Index Stores.
Store documents and indexes
The AlloyDB Omni notebook for Document Stores shows you how to use AlloyDB Omni to store documents and indexes using the AlloyDBDocumentStore and AlloyDBIndexStore classes. You learn how to do the following:
- Create an AlloyDBEngine to connect to your AlloyDB Omni instance using AlloyDBEngine.from_connection_string().
- Create tables for the DocumentStore and IndexStore.
- Initialize a default AlloyDBDocumentStore.
- Set up an AlloyDBIndexStore.
- Add documents to the Docstore.
- Use Document Stores with multiple indexes.
- Load existing indexes.
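As a sketch of how the two stores plug into a LlamaIndex StorageContext, the following assumes the llama-index-alloydb-pg package; the init_doc_store_table() and init_index_store_table() helpers, the connection string, and the table names are assumptions, so check the notebook for exact signatures.

```python
from llama_index.core import SimpleDirectoryReader, StorageContext, SummaryIndex
from llama_index_alloydb_pg import AlloyDBDocumentStore, AlloyDBEngine, AlloyDBIndexStore

engine = AlloyDBEngine.from_connection_string(
    "postgresql+asyncpg://user:password@127.0.0.1:5432/mydb"  # placeholder
)

# Assumed helpers: create the backing tables, then initialize each store.
engine.init_doc_store_table(table_name="doc_store")
engine.init_index_store_table(table_name="index_store")
doc_store = AlloyDBDocumentStore.create_sync(engine, table_name="doc_store")
index_store = AlloyDBIndexStore.create_sync(engine, table_name="index_store")

# Any index built through this storage context persists its nodes in the
# document store and its index metadata in the index store.
storage_context = StorageContext.from_defaults(docstore=doc_store, index_store=index_store)
documents = SimpleDirectoryReader("./data").load_data()
index = SummaryIndex.from_documents(documents, storage_context=storage_context)
```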
Chat Stores
Chat Stores maintain conversation history and context for chat-based applications, enabling personalized interactions. Chat Stores provide a central repository that stores and retrieves chat messages within a conversation, allowing the LLM to maintain context and provide more relevant responses based on the ongoing dialog.
Large language models are stateless by default, which means that they don't retain previous inputs unless those inputs are explicitly provided each time. By using a chat store, you can preserve the context of the conversation, which lets the model generate more relevant and coherent responses over time.
The memory module in LlamaIndex enables efficient storage and retrieval of conversational context, allowing for more personalized and context-aware interactions in chat applications. You can integrate the memory module in LlamaIndex with a ChatStore and a ChatMemoryBuffer. For more information, see LlamaIndex Chat Stores.
Store chat history
The AlloyDB Omni notebook for Chat Stores shows you how to use AlloyDB Omni to store chat history using the AlloyDBChatStore class. You learn how to do the following:
- Create an AlloyDBEngine to connect to your AlloyDB Omni instance using AlloyDBEngine.from_connection_string().
- Initialize a default AlloyDBChatStore.
- Create a ChatMemoryBuffer.
- Create an LLM class instance.
- Use the AlloyDBChatStore without a storage context.
- Use the AlloyDBChatStore with a storage context.
- Create and use the Chat Engine.
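A minimal sketch of that flow follows. The ChatMemoryBuffer and SimpleChatEngine APIs are standard LlamaIndex; the init_chat_store_table() helper, the connection string, and the choice of a Vertex LLM are assumptions (any LlamaIndex LLM works), so check the notebook for exact signatures.

```python
from llama_index.core.chat_engine import SimpleChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.llms.vertex import Vertex  # any LlamaIndex LLM works here
from llama_index_alloydb_pg import AlloyDBChatStore, AlloyDBEngine

engine = AlloyDBEngine.from_connection_string(
    "postgresql+asyncpg://user:password@127.0.0.1:5432/mydb"  # placeholder
)

# Assumed helper: create the backing table, then initialize the chat store.
engine.init_chat_store_table(table_name="chat_store")
chat_store = AlloyDBChatStore.create_sync(engine, table_name="chat_store")

# Persist the running conversation in AlloyDB Omni, keyed per conversation.
memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user-123",
)

# Each turn is written to the chat store and replayed as context on the next turn.
chat_engine = SimpleChatEngine.from_defaults(llm=Vertex(model="gemini-pro"), memory=memory)
print(chat_engine.chat("My name is Dana."))
print(chat_engine.chat("What is my name?"))
```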
Document Reader
The Document Reader efficiently retrieves and transforms data from AlloyDB Omni into LlamaIndex-compatible formats for indexing. The Document Reader interface provides methods to load data from a source as Documents.
A Document is a class that stores a piece of text and its associated metadata. You can use Document Readers to load documents that you want to store in Document Stores or use to create indexes.
For more information, see LlamaIndex Document Reader.
Retrieve data as documents
The AlloyDB Omni notebook for Document Reader shows you how to use AlloyDB Omni to retrieve data as documents using the AlloyDBReader class. You learn how to do the following:
- Create an AlloyDBEngine to connect to your AlloyDB Omni instance using AlloyDBEngine.from_connection_string().
- Create an AlloyDBReader.
- Load Documents using the table_name argument.
- Load Documents using a SQL query.
- Set the page content format.
- Load the documents.
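The following sketch loads the rows of a table as Documents. The create_sync() factory and the table name are assumptions that follow the patterns above; reader.load_data() is the standard LlamaIndex reader method, but see the notebook for exact signatures.

```python
from llama_index_alloydb_pg import AlloyDBEngine, AlloyDBReader

engine = AlloyDBEngine.from_connection_string(
    "postgresql+asyncpg://user:password@127.0.0.1:5432/mydb"  # placeholder
)

# Load every row of a table as a Document; the table name is hypothetical.
reader = AlloyDBReader.create_sync(engine, table_name="products")
documents = reader.load_data()
for doc in documents:
    print(doc.text, doc.metadata)
```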
What's next
- Build LLM-powered applications using LangChain.
- Learn how to deploy AlloyDB Omni and a local AI model on Kubernetes.