Use custom embeddings

If you've already created custom vector embeddings for your data, you can upload them to Vertex AI Search and use them when querying.

This feature is available for data stores with structured data or unstructured data with metadata.

By default, Vertex AI Search automatically generates vector embeddings without any configuration necessary. If you aren't familiar with creating embeddings, Google recommends letting Vertex AI Search create and use embeddings for you. However, if you've created your own embeddings for your data, you might prefer to use them instead of those generated by Vertex AI Search, especially if your custom embeddings contain additional context that can enrich your search retrieval and ranking. For example:

  • Your embeddings have been trained on custom words, such as internal terms whose semantic similarity wouldn't be captured by training on public data—for example, organization-specific terms that appear only in private documents.
  • You've created embeddings for user profiles and want to use these to create a personalized, semantically relevant document ranking. You can use your embeddings to get personalization-based ranking, which can augment Google's document embeddings for relevance-based ranking.

To bring your own embeddings:

  1. Before you begin: Check that your embeddings meet all requirements.
  2. Ingest data with embeddings: Ingest your documents with their embeddings.
  3. Update your schema: Update your schema with your key property fields and dimension specifications.
  4. Specify your embedding: Specify your embedding either globally or per search request.

Before you begin

Before you begin, make sure your embeddings meet the following requirements:

  • Embeddings are supported for structured data and unstructured data with metadata
  • Embeddings must be provided as one-dimensional arrays
  • Embedding dimensionality must be between 1 and 768, inclusive
  • Embeddings are supported for text and images. Videos are not supported
  • Up to two fields can be tagged as embedding key property fields. You might use two fields for cases like A/B testing your embeddings
  • After an embedding field is designated as a key property, the designation currently can't be removed
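As a sketch, the requirements above can be checked programmatically before ingestion. The helper below is hypothetical (it is not part of any Vertex AI Search SDK) and assumes embeddings arrive as plain Python lists:

```python
def check_embedding(vector):
    """Validate a candidate embedding against the requirements above.

    Returns the dimensionality if the vector is acceptable.
    """
    if not isinstance(vector, list):
        raise TypeError("embedding must be a one-dimensional array")
    # Reject nested arrays and non-numeric values in one pass.
    if not all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in vector):
        raise TypeError("embedding values must be numbers (no nested arrays)")
    if not 1 <= len(vector) <= 768:
        raise ValueError("dimensionality must be between 1 and 768, inclusive")
    return len(vector)
```

For example, `check_embedding([0.1, 0.2, 0.3])` returns 3, while a 769-dimensional vector raises `ValueError`.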

Ingest data with embeddings

You can include a document's embeddings in one or two fields of that document's data or metadata during document ingestion.

To ingest data with embeddings:

  1. Prepare your data for ingestion depending on your type of data:

    • Structured data: When you prepare your data, include each document's embeddings as one-dimensional arrays in one or two fields of the document. You can provide up to two embeddings (for example, when A/B testing embeddings). Each embedding must be provided in its own field in the document, for example: "example_embedding_vector": [0.1, 0.2, 0.3 ...]

      Follow the guidance for preparing structured data in the Prepare data for ingesting documentation.

    • Unstructured data with metadata: When you prepare your data, include each document's embedding as a one-dimensional array in a field of the document metadata. You can provide up to two embeddings (for example, when A/B testing embeddings). Each embedding must be provided in its own field in the document metadata, for example: "example_embedding_vector": [0.1, 0.2, 0.3 ...]

      Follow the guidance for preparing unstructured data with metadata for your ingestion method (Cloud Storage or BigQuery) in the Prepare data for ingesting documentation.

  2. Follow the instructions for your data type in Create an engine and ingest data with Vertex AI Search to ingest your documents with embeddings.
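To make step 1 concrete, here is a minimal sketch of a structured-data document that carries its own embedding. The document fields and the example_embedding_vector field name are illustrative, not required names:

```python
import json

# Illustrative document: ordinary fields plus a precomputed embedding,
# supplied as a one-dimensional array in its own field.
doc = {
    "id": "doc-1",
    "title": "Internal glossary",
    "example_embedding_vector": [0.1, 0.2, 0.3],
}

# For NDJSON-style ingestion, each document is serialized as one JSON line.
line = json.dumps(doc)
```

The same shape applies to unstructured data with metadata, except the embedding field lives in the document's metadata object.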

Next, update your schema to use the correct embedding fields.

Update your schema

Update your schema with key property mappings and dimensions for your embedding fields using either the Google Cloud console or the API.

Console

To update your schema using the Google Cloud console, follow these steps:

  1. In the Google Cloud console, go to the Search and Conversation page.

  2. In the navigation menu, click Data stores.

  3. In the Name column, click the data store with the schema that you want to update.

  4. Click the Schema tab to view the schema for your data.

  5. Click the Edit button.

  6. Find your embedding field in the schema. In the Key properties column, select embedding_vector as the key property for that field.

    If you have a second embedding field, repeat this step for that field.

  7. In the Dimension column, enter the number of dimensions for that embedding field.

    If you have a second embedding field, repeat this step for that field.

  8. Click Save to apply your schema changes.

    After you update your schema, re-indexing can take up to 24 hours, depending on the size of your data store.

REST

To update your schema using the API, follow these steps:

  1. Following the REST instructions in Update a schema, specify the key property mapping and the number of dimensions for each embedding field:

    • "keyPropertyMapping": "embedding_vector"
    • "dimension": NUMBER_OF_DIMENSIONS

    For example, here is a formatted JSON schema with 768 dimensions for the field example_embedding_vector:

      {
        "$schema": "https://json-schema.org/draft/2020-12/schema",
        "type": "object",
        "properties": {
          "example_embedding_vector": {
            "type": "array",
            "keyPropertyMapping": "embedding_vector",
            "dimension": 768,
            "items": {
              "type": "number"
            }
          }
        }
      }
    

    In an update schema request, the above formatted JSON would be included as a JSON string:

      "jsonSchema": "{\"$schema\":\"https://json-schema.org/draft/2020-12/schema\",\"type\":\"object\",\"properties\":{\"example_embedding_vector\":{\"type\":\"array\",\"keyPropertyMapping\":\"embedding_vector\",\"dimension\":768,\"items\":{\"type\":\"number\"}}}}"
    

    After you update your schema, re-indexing can take up to 24 hours, depending on the size of your data store.
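The escaped jsonSchema string above can be produced mechanically rather than by hand. The following sketch, assuming Python's standard json module, serializes the formatted schema into the string form that the update request expects:

```python
import json

# The formatted schema from the example above.
schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {
        "example_embedding_vector": {
            "type": "array",
            "keyPropertyMapping": "embedding_vector",
            "dimension": 768,
            "items": {"type": "number"},
        }
    },
}

# json.dumps handles the quote escaping; separators=(",", ":") drops the
# extra whitespace so the string matches the compact form shown above.
body = {"jsonSchema": json.dumps(schema, separators=(",", ":"))}
```

This avoids hand-escaping errors, such as mismatched quotes inside the embedded JSON string.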

Next, specify your embedding.

Specify your embedding

After the re-indexing triggered by your schema update is complete, you can send search requests that include an embedding specification.

There are two ways to specify an embedding:

  • Specify an embedding globally: To apply the same embedding specification to all search requests, update your serving settings to include an embedding specification using either the Google Cloud console or the API.
  • Specify an embedding per search request: Send embeddingSpec in each search request using the API. This overrides the global setting if it is set.

Specify an embedding globally

You can specify the same embedding across all search requests using the Google Cloud console or the API.

Console

To provide the same embedding specification to all search requests, update your serving settings with an embedding specification.

  1. In the Google Cloud console, go to the Search and Conversation page.

  2. Click View for the data store with the schema that you want to update.

  3. Go to the Configurations page and click the Serving tab.

  4. For Embedding field path, enter the name of the field that you have mapped to the embedding key property.

  5. For Ranking expression, enter one or more functions to control the ranking of results. Variables are weighted according to the expression you enter. The ranking expression is a single function or multiple functions joined by + in the format function, { " + ", function }.

    Supported functions are:

    • DOUBLE * relevance_score
    • DOUBLE * dotProduct(EMBEDDING_FIELD_PATH)

    The following variables are accepted:

    • relevance_score: A predefined variable provided by Vertex AI Search to measure the relevance of a document. The score ranges from 0 to 1.0, bounds inclusive.
    • dotProduct(): A predefined function provided by Vertex AI Search. You must provide the same field name to this function as you did for embeddingVector.fieldPath.

    For example:

    • 0.3 * relevance_score
    • 0.5 * relevance_score + 0.3 * dotProduct(example_embedding_field)
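The ranking expression itself is evaluated by Vertex AI Search at query time, but the arithmetic it performs can be sketched locally. The vectors and relevance score below are made-up values for illustration only:

```python
def dot_product(a, b):
    """Plain dot product, standing in for the server-side dotProduct() function."""
    return sum(x * y for x, y in zip(a, b))

# Made-up inputs: a document's stored embedding and a query embedding.
doc_embedding = [0.5, -0.2, 0.1]
query_embedding = [0.96241474, -0.45999944, 0.108588696]
relevance_score = 0.8828125  # predefined variable, between 0 and 1.0

# Equivalent of the expression 0.5 * relevance_score + 0.3 * dotProduct(field):
final_score = 0.5 * relevance_score + 0.3 * dot_product(doc_embedding, query_embedding)
```

Documents are ordered by this combined score, so the two coefficients trade off Google's relevance signal against your embedding's similarity signal.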

REST

To provide the same embedding specification to all search requests, update servingConfig with the embeddingConfig.

  1. Patch the servingConfig entity with the fields you want to update. Specify the fields that you're updating with updateMask.

    In the following example, embeddingConfig uses embeddings in the field example_embedding_field and gives a weight of 0.5 to relevance_score.

    curl -X PATCH \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d '{
          "name": "projects/PROJECT_ID/locations/LOCATION/collections/default_collection/dataStores/DATA_STORE_ID/servingConfigs/default_search",
          "embeddingConfig": {
            "fieldPath": "example_embedding_field"
          },
          "ranking_expression": "0.5 * relevance_score"
        }' \
    'https://discoveryengine.googleapis.com/v1alpha/projects/PROJECT_ID/locations/LOCATION/collections/default_collection/dataStores/DATA_STORE_ID/servingConfigs/default_search?updateMask=embeddingConfig,rankingExpression'
    
    • fieldPath: The name of the field that you have mapped to the embedding key property.
    • ranking_expression: Controls the ranking of results. Variables are weighted according to the expression you enter. The ranking expression is a single function or multiple functions joined by + in the format function, { " + ", function }.

    Supported functions are:

    • DOUBLE * relevance_score
    • DOUBLE * dotProduct(EMBEDDING_FIELD_PATH)

    The following variables are accepted:

    • relevance_score: A predefined variable provided by Vertex AI Search to measure the relevance of a document. The score ranges from 0 to 1.0, bounds inclusive.
    • dotProduct(): A predefined function provided by Vertex AI Search. The dot product is normalized. You must provide the same field name to this function as you did for embeddingVector.fieldPath.

    For example:

    • 0.3 * relevance_score
    • 0.5 * relevance_score + 0.3 * dotProduct(example_embedding_field)
  2. When you send search requests, servingConfig.embeddingConfig is automatically included.

    If you send a search request that explicitly includes a different embeddingSpec, it overrides servingConfig.embeddingConfig. See the Per request instructions for how to provide embedding specifications for single search requests.

Specify an embedding per search request

You can provide an embedding specification for a single search request using the API. A per-request embedding specification overrides any global embedding specification.

  1. Send a search request that includes embeddingSpec.

    The following example of embeddingSpec uses embeddings in the field example_embedding_field, provides the embedding of the query "Example query" as the input vector, and gives a weight of 0.5 to relevance_score and 0.3 to example_embedding_field when calculating ranking.

      "embeddingSpec": {
        "embeddingVectors": [{
          "fieldPath": "example_embedding_field",
          "vector": [
            0.96241474,
            -0.45999944,
            0.108588696
          ]
        }]
      },
      "ranking_expression": "0.5 * relevance_score + 0.3 * dotProduct(example_embedding_field)"
    
    • fieldPath: The name of the field that you have mapped to the embedding key property.
    • vector: The input vector provided as an array.
    • ranking_expression: Controls the ranking of results. Variables are weighted according to the expression you enter. The ranking expression is a single function or multiple functions joined by + in the format function, { " + ", function }.

      Supported functions are:

      • DOUBLE * relevance_score
      • DOUBLE * dotProduct(EMBEDDING_FIELD_PATH)

      The following variables are accepted:

      • relevance_score: A predefined variable provided by Vertex AI Search to measure the relevance of a document. The score ranges from 0 to 1.0, bounds inclusive.
      • dotProduct(): A predefined function provided by Vertex AI Search. You must provide the same field name to this function as you did for embeddingVector.fieldPath.

      For example:

      • 0.3 * relevance_score
      • 0.5 * relevance_score + 0.3 * dotProduct(example_embedding_field)
  2. Get results in the search response. Each search result includes its relevance score and dot product values. For example:

    "modelScores": {
      "dotProduct(example_embedding_field)": [0.02150772698223591],
      "relevance_score": [ 0.8828125 ]
    }
    
    • dotProduct(): The calculated dot product for the search result document.
    • relevance_score: The calculated relevance score for the search result document.
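Putting the two response fields together, a client can reproduce the server's final score from modelScores. The response fragment below reuses the example values above, and the 0.5/0.3 weights assume the ranking expression from the earlier request:

```python
import json

# Example modelScores fragment from one search result (values from the text above).
result = json.loads("""
{
  "modelScores": {
    "dotProduct(example_embedding_field)": [0.02150772698223591],
    "relevance_score": [0.8828125]
  }
}
""")

scores = result["modelScores"]
dot = scores["dotProduct(example_embedding_field)"][0]
relevance = scores["relevance_score"][0]

# Reapply the request's ranking expression:
# 0.5 * relevance_score + 0.3 * dotProduct(example_embedding_field)
final_score = 0.5 * relevance + 0.3 * dot
```

Recomputing the score this way can be useful for debugging why a particular document ranked where it did.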

What's next