This tutorial describes how to migrate data from a third-party vector database to AlloyDB for PostgreSQL using LangChain vector stores. The following vector databases are supported:
- Pinecone
- Weaviate
- Chroma
- Qdrant
- Milvus
This tutorial assumes that you're familiar with Google Cloud, AlloyDB, and asynchronous Python programming.
Objectives
This tutorial shows you how to do the following:
- Extract data from an existing vector database.
- Connect to AlloyDB.
- Initialize the AlloyDB table.
- Initialize a vector store object.
- Run the migration script to insert the data.
Costs
In this document, you use the following billable components of Google Cloud:
- AlloyDB for PostgreSQL
To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
Make sure that you have one of the following LangChain third-party vector stores:
- Pinecone
- Weaviate
- Chroma
- Qdrant
- Milvus
Enable billing and required APIs
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Make sure that billing is enabled for your Google Cloud project.
Enable the Cloud APIs necessary to create and connect to AlloyDB for PostgreSQL.
- In the Confirm project step, click Next to confirm the name of the project you are going to make changes to.
- In the Enable APIs step, click Enable to enable the following:
- AlloyDB API
- Compute Engine API
- Service Networking API
Required roles
To get the permissions that you need to complete the tasks in this tutorial, make sure that you have the following Identity and Access Management (IAM) roles, which allow for table creation and data insertion:
- Owner (roles/owner) or Editor (roles/editor)
- If the user is not an owner or editor, the following IAM roles and PostgreSQL privileges are required:
  - AlloyDB Instance Client (roles/alloydb.client)
  - Cloud AlloyDB Admin (roles/alloydb.admin)
  - Compute Network User (roles/compute.networkUser)
If you want to authenticate to your database using IAM authentication instead of using the built-in authentication in this tutorial, use the notebook that shows how to use AlloyDB for PostgreSQL to store vector embeddings with the AlloyDBVectorStore class.
Create an AlloyDB cluster and user
- Create an AlloyDB cluster and an instance.
  - Enable Public IP to run this tutorial from anywhere. If you're using Private IP, you must run this tutorial from within your VPC.
- Create or select an AlloyDB database user.
  - When you create the instance, a postgres user is created with a password. This user has superuser permissions.
  - This tutorial uses built-in authentication to reduce any authentication friction. IAM authentication is possible using the AlloyDBEngine.
Retrieve the code sample
Copy the code sample from GitHub by cloning the repository:
git clone https://github.com/googleapis/langchain-google-alloydb-pg-python.git
Navigate to the migrations directory:
cd langchain-google-alloydb-pg-python/samples/migrations
Extract data from an existing vector database
Create a client.
Pinecone | Weaviate | Chroma | Qdrant | Milvus
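The client-creation code for each database is in the cloned samples. As an illustration only, the following is a minimal sketch of the Pinecone variant, not the exact sample from the repository; it assumes the pinecone Python package (v3 or later) and uses placeholder values that you replace:

from pinecone import Pinecone

# Create the Pinecone client and open the source index.
# PINECONE_API_KEY and PINECONE_INDEX_NAME are placeholders, not real values.
pinecone_client = Pinecone(api_key="PINECONE_API_KEY")
pinecone_index = pinecone_client.Index("PINECONE_INDEX_NAME")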
Get all the data from the database.
Pinecone
Retrieve vector IDs from the Pinecone index:
And then fetch records by ID from the Pinecone index:
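As an illustration of this Pinecone extraction step (again a sketch, not the repository sample), the following assumes a serverless index where index.list() yields pages of vector IDs and that the document text is stored in a "text" metadata field:

def get_all_ids(index, namespace=""):
    """Collect every vector ID in the index, one page at a time."""
    ids = []
    for page in index.list(namespace=namespace):
        ids.extend(page)
    return ids

def get_all_data(index, ids, namespace="", batch_size=100):
    """Fetch records by ID and split them into IDs, embeddings, content, and metadata."""
    all_ids, embeddings, contents, metadatas = [], [], [], []
    for i in range(0, len(ids), batch_size):
        response = index.fetch(ids=ids[i : i + batch_size], namespace=namespace)
        for record in response.vectors.values():
            metadata = dict(record.metadata or {})
            all_ids.append(record.id)
            embeddings.append(record.values)
            contents.append(metadata.pop("text", ""))  # assumption: text lives in a "text" field
            metadatas.append(metadata)
    return all_ids, embeddings, contents, metadatas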
Weaviate | Chroma | Qdrant | Milvus
Initialize the AlloyDB table
Define the embedding service.
The VectorStore interface requires an embedding service. This workflow doesn't generate new embeddings, so the FakeEmbeddings class is used to avoid any costs.
Pinecone | Weaviate | Chroma | Qdrant | Milvus
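For example, a minimal sketch using the FakeEmbeddings class from langchain_core; the size value here is an assumption and must match the dimensionality of the vectors you are migrating:

from langchain_core.embeddings import FakeEmbeddings

# No new embeddings are generated during the migration, so a fake embedding
# service is enough. Set size to the dimensionality of your existing vectors.
embeddings_service = FakeEmbeddings(size=768)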
Prepare the AlloyDB table.
Connect to AlloyDB using a public IP connection. For more information, see Specifying IP Address Type.
Pinecone | Weaviate | Chroma | Qdrant | Milvus
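The connection code lives in the per-database samples. The following is a hedged sketch of the connection step, run inside an async function; it assumes the AlloyDBEngine class from langchain_google_alloydb_pg and reuses the placeholder values listed later for the migration scripts:

from langchain_google_alloydb_pg import AlloyDBEngine

# Run inside an async function. Replace the placeholders with your values.
engine = await AlloyDBEngine.afrom_instance(
    project_id="PROJECT_ID",
    region="REGION",
    cluster="CLUSTER",
    instance="INSTANCE",
    database="DB_NAME",
    user="DB_USER",
    password="DB_PWD",
    ip_type="PUBLIC",  # assumption: the string form selects a public IP connection
)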
Create a table to copy data into, if it doesn't already exist.
Pinecone | Weaviate | Chroma | Qdrant | Milvus
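A sketch of the table-creation step, assuming the engine created above and a hypothetical table name; vector_size must match the dimensionality of the source embeddings:

# Creates a table with ID, content, embedding, and langchain_metadata columns.
await engine.ainit_vectorstore_table(
    table_name="alloydb_table",  # hypothetical table name
    vector_size=768,             # must match the source embedding dimensionality
)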
Initialize a vector store object
This code adds any additional vector embedding metadata to the langchain_metadata column in JSON format.
To make filtering more efficient, organize this metadata into separate columns.
For more information, see Create a custom Vector Store.
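For example, a hedged sketch of creating the table with dedicated metadata columns instead of relying only on the JSON column; it assumes the Column helper exported by langchain_google_alloydb_pg, uses hypothetical column names, and would replace the plain table-creation call shown earlier:

from langchain_google_alloydb_pg import Column

# Assumption: metadata_columns maps selected metadata fields to their own
# typed columns, which makes filtering on those fields more efficient.
await engine.ainit_vectorstore_table(
    table_name="alloydb_table",
    vector_size=768,
    metadata_columns=[
        Column("source", "TEXT"),
        Column("page", "INTEGER"),
    ],
)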
To initialize a vector store object, run the following command:
Pinecone | Weaviate | Chroma | Qdrant | Milvus
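A minimal sketch of the initialization, assuming the AlloyDBVectorStore class from langchain_google_alloydb_pg and the engine, embedding service, and table name from the previous steps:

from langchain_google_alloydb_pg import AlloyDBVectorStore

# Run inside an async function, reusing the engine and embeddings_service objects.
vector_store = await AlloyDBVectorStore.create(
    engine=engine,
    table_name="alloydb_table",
    embedding_service=embeddings_service,
)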
Insert data into the AlloyDB table:
Pinecone | Weaviate | Chroma | Qdrant | Milvus
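A sketch of the insertion step, reusing the lists returned by the extraction sketch above; it assumes the vector store exposes an aadd_embeddings method for inserting precomputed vectors (check the method name in your installed version of langchain-google-alloydb-pg):

# Insert the extracted records in batches so large collections don't exhaust memory.
batch_size = 100
for i in range(0, len(all_ids), batch_size):
    await vector_store.aadd_embeddings(  # assumption: accepts precomputed embeddings
        texts=contents[i : i + batch_size],
        embeddings=embeddings[i : i + batch_size],
        metadatas=metadatas[i : i + batch_size],
        ids=all_ids[i : i + batch_size],
    )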
Run the migration script
Install the sample dependencies:
pip install -r requirements.txt
Run the sample migration.
Pinecone
python migrate_pinecone_vectorstore_to_alloydb.py
Make the following replacements before you run the sample:
- PINECONE_API_KEY: the Pinecone API key.
- PINECONE_NAMESPACE: the Pinecone namespace.
- PINECONE_INDEX_NAME: the name of the Pinecone index.
- PROJECT_ID: the project ID.
- REGION: the region in which the AlloyDB cluster is deployed.
- CLUSTER: the name of the cluster.
- INSTANCE: the name of the instance.
- DB_NAME: the name of the database.
- DB_USER: the name of the database user.
- DB_PWD: the database secret password.
Weaviate
python migrate_weaviate_vectorstore_to_alloydb.py
Make the following replacements before you run the sample:
- WEAVIATE_API_KEY: the Weaviate API key.
- WEAVIATE_CLUSTER_URL: the Weaviate cluster URL.
- WEAVIATE_COLLECTION_NAME: the Weaviate collection name.
- PROJECT_ID: the project ID.
- REGION: the region in which the AlloyDB cluster is deployed.
- CLUSTER: the name of the cluster.
- INSTANCE: the name of the instance.
- DB_NAME: the name of the database.
- DB_USER: the name of the database user.
- DB_PWD: the database secret password.
Chroma
python migrate_chromadb_vectorstore_to_alloydb.py
Make the following replacements before you run the sample:
- CHROMADB_PATH: the Chroma database path.
- CHROMADB_COLLECTION_NAME: the name of the Chroma database collection.
- PROJECT_ID: the project ID.
- REGION: the region in which the AlloyDB cluster is deployed.
- CLUSTER: the name of the cluster.
- INSTANCE: the name of the instance.
- DB_NAME: the name of the database.
- DB_USER: the name of the database user.
- DB_PWD: the database secret password.
Qdrant
python migrate_qdrant_vectorstore_to_alloydb.py
Make the following replacements before you run the sample:
- QDRANT_PATH: the Qdrant database path.
- QDRANT_COLLECTION_NAME: the name of the Qdrant collection.
- PROJECT_ID: the project ID.
- REGION: the region in which the AlloyDB cluster is deployed.
- CLUSTER: the name of the cluster.
- INSTANCE: the name of the instance.
- DB_NAME: the name of the database.
- DB_USER: the name of the database user.
- DB_PWD: the database secret password.
Milvus
python migrate_milvus_vectorstore_to_alloydb.py
Make the following replacements before you run the sample:
- MILVUS_URI: the Milvus URI.
- MILVUS_COLLECTION_NAME: the name of the Milvus collection.
- PROJECT_ID: the project ID.
- REGION: the region in which the AlloyDB cluster is deployed.
- CLUSTER: the name of the cluster.
- INSTANCE: the name of the instance.
- DB_NAME: the name of the database.
- DB_USER: the name of the database user.
- DB_PWD: the database secret password.
A successful migration prints logs similar to the following without any errors:
Migration completed, inserted all the batches of data to AlloyDB
Open AlloyDB Studio to view your migrated data. For more information, see Manage your data using AlloyDB Studio.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
In the Google Cloud console, go to the Clusters page.
In the Resource name column, click the name of the cluster that you created.
Click Delete cluster.
In Delete cluster, enter the name of your cluster to confirm that you want to delete your cluster.
Click Delete.
If you created a private connection when you created a cluster, delete the private connection:
Go to the Google Cloud console Networking page and click Delete VPC network.