[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-03 (世界標準時間)。"],[[["\u003cp\u003eAlloyDB Omni integrates with LlamaIndex to enable the development of LLM-powered applications, providing interfaces like Vector Store, Document Store, Index Stores, Chat Stores, and Document Reader.\u003c/p\u003e\n"],["\u003cp\u003eLlamaIndex serves as a generative AI orchestration framework, connecting data sources with LLMs, and is particularly effective for document-centric applications due to its structured document management.\u003c/p\u003e\n"],["\u003cp\u003eThe Vector Store integration with AlloyDB Omni allows for efficient storage and management of LlamaIndex data, improving the scalability and performance of LLM applications by combining LlamaIndex's capabilities with AlloyDB Omni's reliability.\u003c/p\u003e\n"],["\u003cp\u003eDocument and Index Stores in LlamaIndex, when integrated with AlloyDB Omni, facilitate the structured management and rapid retrieval of documents and indexes, which is critical for optimized data handling in AI applications.\u003c/p\u003e\n"],["\u003cp\u003eChat Stores, supported by AlloyDB Omni, enable chat-based applications to maintain conversation history and context, leading to more personalized and coherent interactions by allowing LLMs to retain and utilize previous inputs.\u003c/p\u003e\n"]]],[],null,["# Build LLM-powered applications using LlamaIndex\n\nSelect a documentation version: 16.3.0keyboard_arrow_down\n\n- [Current (16.8.0)](/alloydb/omni/current/docs/ai/build-llm-apps-using-llamaindex)\n- [16.8.0](/alloydb/omni/16.8.0/docs/ai/build-llm-apps-using-llamaindex)\n- [16.3.0](/alloydb/omni/16.3.0/docs/ai/build-llm-apps-using-llamaindex)\n- [15.12.0](/alloydb/omni/15.12.0/docs/ai/build-llm-apps-using-llamaindex)\n- [15.7.1](/alloydb/omni/15.7.1/docs/ai/build-llm-apps-using-llamaindex)\n- [15.7.0](/alloydb/omni/15.7.0/docs/ai/build-llm-apps-using-llamaindex)\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n|\n| **Preview\n| --- [AlloyDB Omni](/alloydb/omni/16.3.0/docs/overview)**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| You can process personal data for this feature as outlined in the\n| [Cloud Data Processing\n| Addendum](/terms/data-processing-addendum), subject to the obligations and restrictions described in the agreement under\n| which you access Google Cloud.\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nThis page describes some use cases for building LLM-powered applications using\nLlamaIndex integrated with AlloyDB Omni. Links to notebooks on GitHub\nare provided to help you explore approaches or to help you develop your\napplication.\n\nLlamaIndex is a generative AI orchestration framework that lets you connect and\nintegrate data sources with large language models (LLMs). You can use LlamaIndex\nto build applications that access and query information from private or\ndomain-specific data using natural language queries.\n\nLlamaIndex acts as a bridge between custom data and LLMs, facilitating the\ndevelopment of knowledge assistant applications with retrieval-augmented\ngeneration (RAG) capabilities.\n\nLlamaIndex is well suited for document-centric applications because it\nemphasizes structured document management, which simplifies indexing and\nretrieval. This framework features optimized query mechanisms that enhance the\nspeed and relevance of information access, along with robust metadata handling\nfor nuanced filtering.\n\nFor more information about the LlamaIndex framework, see the\n[LlamaIndex product documentation](https://www.llamaindex.ai/framework).\n\nLlamaIndex components\n---------------------\n\nAlloyDB Omni offers the following LlamaIndex interfaces:\n\n- Vector Store\n- Document Store\n- Index Stores\n- Chat Stores\n- Document Reader\n\nLearn how to use LlamaIndex with the\n[Quickstart for AlloyDB Omni](https://github.com/googleapis/llama-index-alloydb-pg-python/blob/main/samples/llama_index_quick_start.ipynb).\n\nVector Store\n------------\n\nThis LlamaIndex integration lets you use the robust and scalable nature of\nAlloyDB Omni to store and manage your LlamaIndex data. By\ncombining LlamaIndex's indexing and querying capabilities with\nAlloyDB Omni's high performance and reliability, you can build\nmore efficient and scalable LLM-powered applications.\n\nLlamaIndex breaks down a document ---doc, text, and PDFs--- into document components\ncalled *nodes* . The [VectorStore](https://docs.llamaindex.ai/en/stable/module_guides/storing/vector_stores/)\ncan only contain the embedding vectors of ingested node contents and the text of\nnodes. A node contains text content, vector\nembeddings, and metadata. You can apply filters on these metadata fields to\nrestrict node retrieval to those that match specified metadata criteria.\n\nTo work with vector stores in AlloyDB Omni, use the\n`AlloyDBVectorStore` class. For more information, see\n[LlamaIndex Vector Stores](https://docs.llamaindex.ai/en/stable/module_guides/storing/vector_stores/).\n\n### Store vector embeddings with the AlloyDBVectorStore class\n\nThe\n[AlloyDB Omni notebook](https://github.com/googleapis/llama-index-alloydb-pg-python/blob/main/samples/llama_index_vector_store.ipynb)\nfor vector store shows you how to do the following:\n\n- Create an `AlloyDBEngine` to connect to your AlloyDB Omni instance using `AlloyDBEngine.from_connection_string()`.\n- Initialize a table to store vector embeddings.\n- Create an embedding class instance using any [Llama Index embeddings model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/).\n- Initialize a default `AlloyDBVectorStore` vector store.\n- Create and query an index from the vector store by using [VectorStoreIndex](https://docs.llamaindex.ai/en/stable/module_guides/indexing/vector_store_index/).\n- Create a custom Vector Store to store and filter metadata.\n- Add an ANN index to improve search latency.\n\nDocument and Index Stores\n-------------------------\n\nLlamaIndex Document Stores integration manages structured document storage and\nretrieval, optimizing for LlamaIndex document-centric capabilities. Document\nStore stores the content related to the vectors in the vector store.\n\nFor more information, see the\n[LlamaIndex Document Stores](https://docs.llamaindex.ai/en/stable/module_guides/storing/docstores/)\nproduct documentation.\n\nIndex Stores facilitate the management of indexes to enable rapid querying and\ndata retrieval, for example, summary, keyword, and Tree index. `Index` in\nLlamaIndex is a lightweight storage only for the node metadata. Updates to node\nmetadata don't require re-indexing (read embedding generation) of the full node\nor all nodes in a document.\n\nFor more information, see\n[LlamaIndex Index Stores](https://docs.llamaindex.ai/en/stable/module_guides/storing/index_stores/).\n\n### Store documents and indexes\n\nThe [AlloyDB Omni notebook](https://github.com/googleapis/llama-index-alloydb-pg-python/blob/main/samples/llama_index_doc_store.ipynb)\nfor Document Stores shows you how to use AlloyDB Omni to store\ndocuments and indexes using the `AlloyDBDocumentStore` and `AlloyDBIndexStore`\nclasses. You learn how to do the following:\n\n- Create an `AlloyDBEngine` to connect to your AlloyDB Omni instance using `AlloyDBEngine.from_connection_string()`.\n- Create tables for the DocumentStore and IndexStore.\n- Initialize a default `AlloyDBDocumentStore`.\n- Set up an `AlloyDBIndexStore`.\n- Add documents to the `Docstore`.\n- Use Document Stores with multiple indexes.\n- Load existing indexes.\n\nChat Stores\n-----------\n\nChat Stores maintain conversation history and context for chat-based\napplications, enabling personalized interactions. Chat Stores provide a central\nrepository that stores and retrieves chat messages within a conversation,\nallowing the LLM to maintain context and provide more relevant responses based\non the ongoing dialog.\n\nLarge language models are stateless by default, which means that they don't\nretain previous inputs unless those inputs are explicitly provided each time. By\nusing a chat store, you can preserve the context of the conversation, which lets\nthe model generate more relevant and coherent responses over time.\n\nThe memory module in LlamaIndex enables efficient storage and retrieval of\nconversational context, allowing for more personalized and context-aware\ninteractions in chat applications. You can integrate the memory module in\nLlamaIndex with a [ChatStore](https://docs.llamaindex.ai/en/stable/module_guides/storing/chat_stores/)\nand a [ChatMemoryBuffer](https://docs.llamaindex.ai/en/stable/api_reference/memory/chat_memory_buffer/).\nFor more information, see\n[LlamaIndex Chat Stores](https://docs.llamaindex.ai/en/stable/module_guides/storing/chat_stores/).\n\n### Store chat history\n\nThe [AlloyDB Omni notebook](https://github.com/googleapis/llama-index-alloydb-pg-python/blob/main/samples/llama_index_chat_store.ipynb)\nfor Chat Stores shows you how to use `AlloyDB for PostgreSQL` to store chat\nhistory using the `AlloyDBChatStore` class. You learn how to do the following:\n\n- Create an `AlloyDBEngine` to connect to your AlloyDB Omni instance using `AlloyDBEngine.from_connection_string()`.\n- Initialize a default `AlloyDBChatStore`.\n- Create a `ChatMemoryBuffer`.\n- Create an LLM class instance.\n- Use the `AlloyDBChatStore` without a storage context.\n- Use the `AlloyDBChatStore` with a storage context.\n- Create and use the Chat Engine.\n\nDocument Reader\n---------------\n\nDocument Reader efficiently retrieves and transforms data from\nAlloyDB Omni into LlamaIndex-compatible formats for indexing. The\nDocument Reader interface provides methods to load data from a source as\n`Documents`.\n\n[Document](https://docs.llamaindex.ai/en/stable/module_guides/loading/documents_and_nodes/)\nis a class that stores a piece of text and associated metadata. You can use\nDocument Readers to load documents that you want to store in Document Stores or\nused to create Indexes.\n\nFor more information, see\n[LlamaIndex Document Reader](https://docs.llamaindex.ai/en/stable/understanding/loading/loading/#using-readers-from-llamahub).\n\n### Retrieve data as documents\n\nThe [AlloyDB Omni notebook](https://github.com/googleapis/llama-index-alloydb-pg-python/blob/main/samples/llama_index_reader.ipynb)\nfor Document Reader shows you how to use AlloyDB Omni to retrieve\ndata as documents using the `AlloyDBReader` class. You learn how to do the\nfollowing:\n\n- Create an `AlloyDBEngine` to connect to your AlloyDB Omni instance using `AlloyDBEngine.from_connection_string()`.\n- Create `AlloyDBReader`.\n- Load Documents using the `table_name` argument.\n- Load Documents using a SQL query.\n- Set page content format.\n- Load the documents.\n\nWhat's next\n-----------\n\n- [Build LLM-powered applications using LangChain](/alloydb/docs/ai/langchain).\n- Learn how to [deploy AlloyDB Omni and a local AI model on Kubernetes](https://codelabs.developers.google.com/alloydb-omni-gke-embeddings#0)."]]