Generate embeddings

This page shows you how to use AlloyDB with a large language model (LLM) to generate and store vector embeddings, and how to use those embeddings to perform LLM-based semantic searches.

AlloyDB lets you use an LLM hosted by Vertex AI to translate a text string into an embedding, which is the model's representation of the given text's semantic meaning as a numeric vector. For more information about Vertex AI support for text embeddings, see Text embeddings.

You will use the following two extensions to complete the steps in this document:

  • google_ml_integration extension: This extension is installed by default and includes functions that let you generate embeddings. The examples in this document use the general embedding() function that the google_ml_integration extension provides.

  • vector extension: This extension includes functions that let you store embeddings as vector data and create indexes on them.

For more information about using ML models with AlloyDB, see Build generative AI applications.

Before you begin

To let AlloyDB generate embeddings, make sure you meet the following requirements:

Regional restrictions

You can generate embeddings in regions where Generative AI on Vertex AI is available. For a list of regions, see Generative AI on Vertex AI locations.

For AlloyDB, ensure that both the AlloyDB cluster and the Vertex AI model you are querying are in the same region.

Required database extension

  • Ensure that the google_ml_integration extension is installed on your AlloyDB database.

    CREATE EXTENSION google_ml_integration;
    

    This extension is included with AlloyDB. You can install it on any database in your cluster.

  • Set the google_ml_integration.enable_model_support database flag to off. You can check both requirements with the queries shown after this list.
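
To check these requirements from psql, you can run the following queries. This is a minimal verification sketch: pg_extension is a standard PostgreSQL catalog, and current_setting() with its second argument set to true returns NULL instead of raising an error when the flag is not defined for your instance.

  -- Confirm that the google_ml_integration extension is installed and show its version.
  SELECT extversion FROM pg_extension WHERE extname = 'google_ml_integration';

  -- Show the current value of the model-support flag, or NULL if it is not set.
  SELECT current_setting('google_ml_integration.enable_model_support', true);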

Set up model access

Before you can generate embeddings from an AlloyDB database, you must configure AlloyDB to work with a text embedding model.

To work with the cloud-based textembedding-gecko model, you need to integrate your database with Vertex AI.

Grant database users access to generate embeddings

Grant database users permission to execute the embedding function so that they can run predictions (a filled-in example follows these steps):

  1. Connect a psql client to the cluster's primary instance, as described in Connect a psql client to an instance.

  2. At the psql command prompt, connect to the database and grant permissions:

    \c DB_NAME
    
    GRANT EXECUTE ON FUNCTION embedding TO USER_NAME;
    

    Replace the following:

    • DB_NAME: the name of the database on which the permissions should be granted

    • USER_NAME: the name of the user for whom the permissions should be granted
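
For example, assuming a hypothetical database named docs_db and a database user named app_user (replace both with your own names), the commands look like the following:

  \c docs_db

  GRANT EXECUTE ON FUNCTION embedding TO app_user;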

Generate an embedding

AlloyDB provides a function that lets you translate text into a vector embedding. You can then store that embedding in your database as vector data, and optionally use pgvector functions to base queries on it.

To generate an embedding using AlloyDB, use the embedding() function provided by the google_ml_integration extension:

SELECT embedding('MODEL_IDVERSION_TAG', 'TEXT');

Replace the following:

  • MODEL_ID: the ID of the model to query.

    If you are using the Vertex AI Model Garden, then specify textembedding-gecko as the model ID. This is one of the cloud-based models that AlloyDB can use for text embeddings. For more information, see Text embeddings.

  • Optional: VERSION_TAG: the version tag of the model to query. Prepend the tag with @.

    If you are using one of the textembedding-gecko English models with Vertex AI, then specify one of the version tags listed in Model versions. For example, the 003 tag yields textembedding-gecko@003.

    Google strongly recommends that you always specify the version tag. If you don't specify the version tag, then AlloyDB always uses the latest model version, which might lead to unexpected results.

  • TEXT: the text to translate into a vector embedding.

The following example uses version 003 of the textembedding-gecko English model to generate an embedding based on a provided literal string:

SELECT embedding('textembedding-gecko@003', 'AlloyDB is a managed, cloud-hosted SQL database service.');
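
Because embedding() returns an array of real values, you can wrap the call in the standard PostgreSQL array_length() function to confirm that the call succeeds and to see the dimensionality of the result. For textembedding-gecko@003, this returns 768:

  -- Returns the number of dimensions in the generated embedding (768 for this model).
  SELECT array_length(
      embedding('textembedding-gecko@003', 'AlloyDB is a managed, cloud-hosted SQL database service.'),
      1);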

Store embeddings

The embeddings generated using the google_ml_integration extension are implemented as arrays of real values. You can pass these generated embeddings as input to pgvector extension functions.

To store this value in a table, add a real[] column:

ALTER TABLE TABLE ADD COLUMN EMBEDDING_COLUMN real[DIMENSIONS];

After you create a column to store embeddings, you can populate it based on the values already stored in another column in the same table:

UPDATE TABLE SET EMBEDDING_COLUMN = embedding('MODEL_IDVERSION_TAG', SOURCE_TEXT_COLUMN);

Replace the following:

  • TABLE: the table name

  • EMBEDDING_COLUMN: the name of the embedding column

  • DIMENSIONS: the number of dimensions that the embedding model generates; for example, 768 for textembedding-gecko@003

  • MODEL_ID: the ID of the model to query.

    If you are using the Vertex AI Model Garden, then specify textembedding-gecko as the model ID. This is one of the cloud-based models that AlloyDB can use for text embeddings. For more information, see Text embeddings.

  • Optional: VERSION_TAG: the version tag of the model to query. Prepend the tag with @.

    If you are using one of the textembedding-gecko English models with Vertex AI, then specify one of the version tags listed in Model versions. For example, the 003 tag yields textembedding-gecko@003.

    Google strongly recommends that you always specify the version tag. If you don't specify the version tag, then AlloyDB always uses the latest model version, which might lead to unexpected results.

  • SOURCE_TEXT_COLUMN: the name of the column storing the text to translate into embeddings
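
For example, the following sketch assumes a hypothetical items table with a complaint_text text column (replace the names with your own), and uses textembedding-gecko@003, which generates 768-dimensional embeddings:

  -- Add a column to hold 768-dimensional embeddings.
  ALTER TABLE items ADD COLUMN complaint_embedding real[768];

  -- Populate the new column from the existing text column.
  UPDATE items
  SET complaint_embedding = embedding('textembedding-gecko@003', complaint_text);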

You can also use the embedding() function to translate text into a vector. You can then apply the pgvector nearest-neighbor operator, <->, to that vector to find the database rows with the most semantically similar embeddings.

Because embedding() returns a real array, you must explicitly cast the embedding() call to vector in order to use these values with pgvector operators.

  CREATE EXTENSION google_ml_integration VERSION '1.2';
  CREATE EXTENSION IF NOT EXISTS vector;

  SELECT * FROM TABLE
    ORDER BY EMBEDDING_COLUMN::vector
    <-> embedding('MODEL_IDVERSION_TAG', 'TEXT')::vector
    LIMIT ROW_COUNT;
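
Continuing the hypothetical items table from the earlier sketch (which is assumed to also have an id column), the following query returns the ten rows whose stored embeddings are closest to the embedding of the query string:

  SELECT id, complaint_text
    FROM items
    ORDER BY complaint_embedding::vector
      <-> embedding('textembedding-gecko@003', 'I was charged twice for my order')::vector
    LIMIT 10;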

Use model version tags to avoid errors

Google strongly recommends that you always use a stable version of your chosen embeddings model. For most models, this means explicitly setting a version tag.

Calling the embedding() function without specifying the version tag of the model is syntactically valid, but it is also error-prone.

If you omit the version tag when using a model in the Vertex AI Model Garden, then Vertex AI uses the latest version of the model. This might not be the latest stable version. For more information about available Vertex AI model versions, see Model versions.

A given Vertex AI model version always returns the same embedding() response for a given text input. If you don't specify model versions in your calls to embedding(), then a newly published model version can abruptly change the returned vector for a given input, causing errors or other unexpected behavior in your applications.

To avoid these problems, always specify the model version.
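
As a minimal illustration of the difference, compare the following two calls. The unpinned call resolves to whatever version Vertex AI currently serves as the latest, while the pinned call always uses version 003:

  -- Unpinned: uses the latest published model version, which can change over time.
  SELECT embedding('textembedding-gecko', 'example text');

  -- Pinned: always uses version 003, so the returned vector stays stable.
  SELECT embedding('textembedding-gecko@003', 'example text');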

Troubleshoot

ERROR: Model not found for model_id

Error message

When you try to generate an embedding using either the embedding() or the google_ml.embedding() function, the following error occurs:

ERROR: 'Model not found for model_id:

To resolve this error, try the following:

  • Upgrade the google_ml_integration extension, and then try generating embeddings again.

    ALTER EXTENSION google_ml_integration UPDATE;
    

    You can also drop the extension, and then create it again.

    DROP EXTENSION google_ml_integration;
    CREATE EXTENSION google_ml_integration;
    
  • If you are generating embeddings using the google_ml.embedding() function, then ensure that the model is registered and you are using the correct model_id in the query.

What's next