The google_ml_integration extension includes embedding functions in two different namespaces: public and google_ml. This page describes how to generate text embeddings using functions from these namespaces.
The embedding() function in the public schema can be used with any Vertex AI embedding model without registering the endpoint. If you want to pass custom information such as the task type, register the endpoint, and then use the google_ml.embedding() function in the google_ml schema. For more information about registering an endpoint, see Register a model.
How embeddings work
Imagine a database running on AlloyDB with the following characteristics:
- The database contains a table, items. Each row in this table describes an item that your business sells.
- The items table contains a column, complaints. This TEXT column stores buyer complaints logged about each item.
- The database integrates with the Vertex AI Model Garden, giving it access to the gemini-embedding-001 English model.
Even though this database stores complaints about items, these complaints are stored as plain text, making them difficult to query. For example, to see which items have the most complaints from customers who received the wrong color of merchandise, you can perform ordinary SQL queries on the table that look for keyword matches. However, this approach only matches rows that contain those exact keywords.
For example, a basic SQL query such as SELECT * FROM items WHERE complaints LIKE '%wrong color%' doesn't return a row whose complaints field contains only The picture shows a blue one, but the one I received was red.
SQL queries using LLM-powered embeddings can help return semantically similar responses for such queries. By applying embeddings, you can query the table in this example for items whose complaints have semantic similarity to a given text prompt, such as It was the wrong color.
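For example, after you store an embedding of each complaint in a pgvector column, a similarity query might look like the following sketch. The complaint_embedding column, the id column, and the vector dimension are illustrative assumptions rather than part of this page; <=> is pgvector's cosine distance operator, and the result of google_ml.embedding() is cast to the vector type for the comparison.

-- Illustrative sketch: assumes a complaint_embedding vector column on items.
SELECT id, complaints
FROM items
ORDER BY complaint_embedding <=> google_ml.embedding(
    model_id => 'gemini-embedding-001',
    content => 'It was the wrong color')::vector
LIMIT 10;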
To generate embeddings, use the embedding function from one of these schemas. The steps on this page use the google_ml.embedding() function.
Before you begin
To let AlloyDB generate embeddings, do the following:
- Connect to your database using psql or AlloyDB for PostgreSQL Studio as the postgres user.
- Verify that the google_ml_integration extension is installed.
- Verify that the google_ml_integration.enable_model_support flag is set to on (see the verification sketch after this list).
- Configure AlloyDB to work with Vertex AI. For more information, see Integrate your database with Vertex AI.
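A minimal way to check the extension and the flag from a psql session might look like the following; both statements are standard PostgreSQL and return the installed extension version and the current flag value.

-- Check that the extension is installed and show its version.
SELECT extversion FROM pg_extension WHERE extname = 'google_ml_integration';
-- Show the current value of the model support flag.
SHOW google_ml_integration.enable_model_support;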
Grant database users permission to execute the embedding function (an example with sample names follows the placeholder descriptions):

\c DB_NAME
GRANT EXECUTE ON FUNCTION google_ml.embedding TO USER_NAME;
Replace the following:
DB_NAME: the name of the database on which the permissions should be granted
USER_NAME: the name of the user for whom the permissions should be granted
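For example, with a hypothetical database named vector_demo and a hypothetical user named app_user (neither name comes from this page), the commands would be:

\c vector_demo
GRANT EXECUTE ON FUNCTION google_ml.embedding TO app_user;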
Generate embeddings
Use the google_ml.embedding() SQL function to call text embedding models.
To call the model and generate embeddings, run the following query:
SELECT
google_ml.embedding(
model_id => 'MODEL_ID',
content => 'CONTENT');
Replace the following:
MODEL_ID: the qualified model ID, for example, gemini-embedding-001.
CONTENT: the text to translate into a vector embedding.
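The function returns the embedding as an array of floating-point values, so a common pattern is to cast the result to pgvector's vector type and store it next to the source text. The following sketch assumes the items table from the earlier example plus a complaint_embedding vector column that you have already created; the column name and dimension are illustrative.

-- Assumes complaint_embedding is declared as vector(3072); verify the
-- output dimension of the model you use before creating the column.
UPDATE items
SET complaint_embedding = google_ml.embedding(
    model_id => 'gemini-embedding-001',
    content => complaints)::vector;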
Examples for generating embeddings
This section lists some examples of generating embeddings by using registered model endpoints.
Gemini embedding models
To generate embeddings for a registered gemini-embedding-001 model endpoint, run the following statement:
SELECT
google_ml.embedding(
model_id => 'gemini-embedding-001',
content => 'AlloyDB is a managed, cloud-hosted SQL database service');
If your AlloyDB cluster and the Vertex AI endpoint are in different projects, then set the model_id to the qualified path of the endpoint, for example, projects/PROJECT_ID/locations/REGION_ID/publishers/google/models/gemini-embedding-001.
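As a sketch, the same call with the fully qualified model path keeps the PROJECT_ID and REGION_ID placeholders from the preceding paragraph; replace them with your own values.

SELECT
  google_ml.embedding(
    model_id => 'projects/PROJECT_ID/locations/REGION_ID/publishers/google/models/gemini-embedding-001',
    content => 'AlloyDB is a managed, cloud-hosted SQL database service');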
OpenAI embedding models
To generate embeddings for a registered text-embedding-ada-002 model endpoint from OpenAI, run the following statement:
SELECT
google_ml.embedding(
model_id => 'text-embedding-ada-002',
content => 'e-mail spam');
To generate embeddings for a registered text-embedding-3-small or text-embedding-3-large model endpoint from OpenAI, run a statement like the following:
SELECT
google_ml.embedding(
model_id => 'text-embedding-3-small',
content => 'Vector embeddings in AI');
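Similarly, assuming you registered the text-embedding-3-large endpoint under that model ID, the call differs only in the model_id value:

SELECT
  google_ml.embedding(
    model_id => 'text-embedding-3-large',
    content => 'Vector embeddings in AI');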
What's next
- Run vector similarity searches.
- Learn how to build a smart shopping assistant with AlloyDB, pgvector, and model endpoint management.
- Create indexes and query vectors.
- Learn an example embedding workflow.