Perform semantic search and retrieval-augmented generation

This tutorial guides you through the end-to-end process of creating and using text embeddings for semantic search and retrieval-augmented generation (RAG).

This tutorial covers the following tasks:

  • Creating a BigQuery ML remote model over a Vertex AI embedding model.
  • Using the remote model with the ML.GENERATE_EMBEDDING function to generate embeddings from text in a BigQuery table.
  • Creating a vector index to index the embeddings in order to improve search performance.
  • Using the VECTOR_SEARCH function with the embeddings to search for similar text.
  • Performing RAG by generating text with the ML.GENERATE_TEXT function, and using vector search results to augment the prompt input and improve results.

This tutorial uses the BigQuery public table patents-public-data.google_patents_research.publications.

Required roles

To run this tutorial, you need the following Identity and Access Management (IAM) roles:

  • Create and use BigQuery datasets, connections, and models: BigQuery Admin (roles/bigquery.admin).
  • Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin).

These predefined roles contain the permissions required to perform the tasks in this document. The following section lists the exact permissions that are required:

Required permissions

  • Create a dataset: bigquery.datasets.create
  • Create, delegate, and use a connection: bigquery.connections.*
  • Set the default connection: bigquery.config.*
  • Set service account permissions: resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
  • Create a model and run inference:
    • bigquery.jobs.create
    • bigquery.models.create
    • bigquery.models.getData
    • bigquery.models.updateData
    • bigquery.models.updateMetadata

You might also be able to get these permissions with custom roles or other predefined roles.

Costs

In this document, you use the following billable components of Google Cloud:

  • BigQuery ML: You incur costs for the data that you process in BigQuery.
  • Vertex AI: You incur costs for calls to the Vertex AI service that's represented by the remote model.

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

For more information about BigQuery pricing, see BigQuery pricing in the BigQuery documentation.

For more information about Vertex AI pricing, see the Vertex AI pricing page.

Before you begin

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  2. Make sure that billing is enabled for your Google Cloud project.

  3. Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.

    Enable the APIs

Create a dataset

Create a BigQuery dataset to store your ML model.

Console

  1. In the Google Cloud console, go to the BigQuery page.

    Go to the BigQuery page

  2. In the Explorer pane, click your project name.

  3. Click View actions > Create dataset.

  4. On the Create dataset page, do the following:

    • For Dataset ID, enter bqml_tutorial.

    • For Location type, select Multi-region, and then select US (multiple regions in United States).

    • Leave the remaining default settings as they are, and click Create dataset.

bq

To create a new dataset, use the bq mk command with the --location flag. For a full list of possible parameters, see the bq mk --dataset command reference.

  1. Create a dataset named bqml_tutorial with the data location set to US and a description of BigQuery ML tutorial dataset:

    bq --location=US mk -d \
     --description "BigQuery ML tutorial dataset." \
     bqml_tutorial

    Instead of using the --dataset flag, the command uses the -d shortcut. If you omit -d and --dataset, the command defaults to creating a dataset.

  2. Confirm that the dataset was created:

    bq ls

API

Call the datasets.insert method with a defined dataset resource.

{
  "datasetReference": {
     "datasetId": "bqml_tutorial"
  }
}

BigQuery DataFrames

Before trying this sample, follow the BigQuery DataFrames setup instructions in the BigQuery quickstart using BigQuery DataFrames. For more information, see the BigQuery DataFrames reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

import google.cloud.bigquery

# Construct a BigQuery client object.
bqclient = google.cloud.bigquery.Client()

# Create the bqml_tutorial dataset; exists_ok=True avoids an error if it already exists.
bqclient.create_dataset("bqml_tutorial", exists_ok=True)

Create the remote model for text embedding generation

Create a remote model that represents a hosted Vertex AI text embedding generation model:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE MODEL `bqml_tutorial.embedding_model`
      REMOTE WITH CONNECTION DEFAULT
      OPTIONS (ENDPOINT = 'text-embedding-005');

    The query takes several seconds to complete, after which the model embedding_model appears in the bqml_tutorial dataset in the Explorer pane. Because the query uses a CREATE MODEL statement to create a model, there are no query results.
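
The preceding statement uses your project's default connection. If you prefer to use a specific BigQuery Cloud resource connection instead, you can name it explicitly. The following variant is a sketch that assumes you have already created a connection and granted its service account the Vertex AI User role; PROJECT_ID, LOCATION, and CONNECTION_ID are placeholders for your own values:

-- Sketch: create the remote model over an explicitly named connection.
-- Replace PROJECT_ID, LOCATION, and CONNECTION_ID with your own values.
CREATE OR REPLACE MODEL `bqml_tutorial.embedding_model`
  REMOTE WITH CONNECTION `PROJECT_ID.LOCATION.CONNECTION_ID`
  OPTIONS (ENDPOINT = 'text-embedding-005');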

Generate text embeddings

Generate text embeddings from patent abstracts using the ML.GENERATE_EMBEDDING function, and then write them to a BigQuery table so that they can be searched.

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE TABLE `bqml_tutorial.embeddings` AS
    SELECT * FROM ML.GENERATE_EMBEDDING(
      MODEL `bqml_tutorial.embedding_model`,
      (
        SELECT *, abstract AS content
        FROM `patents-public-data.google_patents_research.publications`
        WHERE LENGTH(abstract) > 0 AND LENGTH(title) > 0 AND country = 'Singapore'
      )
    )
    WHERE LENGTH(ml_generate_embedding_status) = 0;

This query takes approximately 5 minutes to complete.

Embedding generation using the ML.GENERATE_EMBEDDING function might fail due to Vertex AI LLM quotas or service unavailability. Error details are returned in the ml_generate_embedding_status column. An empty ml_generate_embedding_status column indicates successful embedding generation.
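
If you want to confirm how many rows were embedded successfully, you can compare the row count of the output table with the number of qualifying source rows; any shortfall corresponds to rows that were filtered out because ml_generate_embedding_status wasn't empty. The following query is an optional sketch of that check:

-- Optional check: embedded_rows should equal source_rows if every call succeeded.
SELECT
  (SELECT COUNT(*) FROM `bqml_tutorial.embeddings`) AS embedded_rows,
  (SELECT COUNT(*)
   FROM `patents-public-data.google_patents_research.publications`
   WHERE LENGTH(abstract) > 0 AND LENGTH(title) > 0 AND country = 'Singapore') AS source_rows;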

For alternative text embedding generation methods in BigQuery, see the Embed text with pretrained TensorFlow models tutorial.

Create a vector index

If you create a vector index on an embedding column, a vector search performed on that column uses the Approximate Nearest Neighbor (ANN) search technique. This technique improves vector search performance at the cost of reduced recall, so the results it returns are more approximate.

To create a vector index, use the CREATE VECTOR INDEX data definition language (DDL) statement:

  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following SQL statement:

    CREATE OR REPLACE VECTOR INDEX my_index
    ON `bqml_tutorial.embeddings`(ml_generate_embedding_result)
    OPTIONS(index_type = 'IVF',
      distance_type = 'COSINE',
      ivf_options = '{"num_lists":500}')

Creating a vector index typically takes only a few seconds. It takes another 2 or 3 minutes for the vector index to be populated and ready to use.

Verify vector index readiness

The vector index is populated asynchronously. You can check whether the index is ready to be used by querying the INFORMATION_SCHEMA.VECTOR_INDEXES view and verifying that the coverage_percentage column value is greater than 0 and the last_refresh_time column value isn't NULL.

  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following SQL statement:

    SELECT table_name, index_name, index_status,
    coverage_percentage, last_refresh_time, disable_reason
    FROM `PROJECT_ID.bqml_tutorial.INFORMATION_SCHEMA.VECTOR_INDEXES`

    Replace PROJECT_ID with your project ID.
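
As a convenience, you can encode the readiness conditions directly in the query. The following sketch returns TRUE once the index is ready to use, again with PROJECT_ID as a placeholder for your project ID:

-- Sketch: returns TRUE when the vector index is populated and ready to use.
SELECT
  coverage_percentage > 0 AND last_refresh_time IS NOT NULL AS index_is_ready
FROM `PROJECT_ID.bqml_tutorial.INFORMATION_SCHEMA.VECTOR_INDEXES`
WHERE index_name = 'my_index';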

Perform a text similarity search using the vector index

Use the VECTOR_SEARCH function to search for relevant patents that match embeddings generated from a text query.

The top_k argument determines the number of matches to return, in this case five. The fraction_lists_to_search option determines the fraction of vector index lists to search. The vector index you created has 500 lists, so a fraction_lists_to_search value of 0.01 means that this vector search scans five of those lists. A lower fraction_lists_to_search value, as shown here, trades some recall for faster performance. For more information about vector index lists, see the num_lists vector index option.

The model that you use to generate the embeddings in this query must be the same as the one that you used to generate the embeddings in the table you are searching against; otherwise, the search results won't be accurate.

  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following SQL statement:

    SELECT query.query, base.publication_number, base.title, base.abstract
    FROM VECTOR_SEARCH(
      TABLE `bqml_tutorial.embeddings`, 'ml_generate_embedding_result',
      (
        SELECT ml_generate_embedding_result, content AS query
        FROM ML.GENERATE_EMBEDDING(
          MODEL `bqml_tutorial.embedding_model`,
          (SELECT 'improving password security' AS content))
      ),
      top_k => 5, options => '{"fraction_lists_to_search": 0.01}')

    The output is similar to the following:

    +-----------------------------+--------------------+-------------------------------------------------+-------------------------------------------------+
    |            query            | publication_number |                       title                     |                      abstract                   |
    +-----------------------------+--------------------+-------------------------------------------------+-------------------------------------------------+
    | improving password security | SG-120868-A1       | Data storage device security method and a...    | Methods for improving security in data stora... |
    | improving password security | SG-10201610585W-A  | Passsword management system and process...      | PASSSWORD MANAGEMENT SYSTEM AND PROCESS ...     |
    | improving password security | SG-148888-A1       | Improved system and method for...               | IMPROVED SYSTEM AND METHOD FOR RANDOM...        |
    | improving password security | SG-194267-A1       | Method and system for protecting a password...  | A system for providing security for a...        |
    | improving password security | SG-120868-A1       | Data storage device security...                 | Methods for improving security in data...       |
    +-----------------------------+--------------------+-------------------------------------------------+-------------------------------------------------+
    
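The output in the preceding example omits the computed distances. If you want to inspect how close each match is, or compare the approximate results against an exact search, the VECTOR_SEARCH function also returns a distance column, and its options argument accepts a use_brute_force setting that bypasses the vector index. The following query is a sketch of that comparison:

-- Sketch: exact (brute-force) search that ignores the vector index and also
-- returns the cosine distance for each match.
SELECT query.query, base.publication_number, base.title, distance
FROM VECTOR_SEARCH(
  TABLE `bqml_tutorial.embeddings`, 'ml_generate_embedding_result',
  (
    SELECT ml_generate_embedding_result, content AS query
    FROM ML.GENERATE_EMBEDDING(
      MODEL `bqml_tutorial.embedding_model`,
      (SELECT 'improving password security' AS content))
  ),
  top_k => 5,
  distance_type => 'COSINE',
  options => '{"use_brute_force": true}');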

Create the remote model for text generation

Create a remote model that represents a hosted Vertex AI text generation model:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE MODEL `bqml_tutorial.text_model`
      REMOTE WITH CONNECTION DEFAULT
      OPTIONS (ENDPOINT = 'gemini-1.5-flash-002');

    The query takes several seconds to complete, after which the model text_model appears in the bqml_tutorial dataset in the Explorer pane. Because the query uses a CREATE MODEL statement to create a model, there are no query results.
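
Optionally, you can confirm that the new remote model and its connection work before running the full RAG query. The following is a minimal smoke-test sketch that sends a literal prompt to the model:

-- Optional smoke test: send a single literal prompt to the remote text model.
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `bqml_tutorial.text_model`,
  (SELECT 'Say hello in one short sentence.' AS prompt),
  STRUCT(50 AS max_output_tokens, TRUE AS flatten_json_output));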

Generate text augmented by vector search results

Use the search results to augment the prompt, and then generate text with the ML.GENERATE_TEXT function.

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    SELECT ml_generate_text_llm_result AS generated, prompt
    FROM ML.GENERATE_TEXT(
      MODEL `bqml_tutorial.text_model`,
      (
        SELECT CONCAT(
          'Propose some project ideas to improve user password security using the context below: ',
          STRING_AGG(
            FORMAT("patent title: %s, patent abstract: %s", base.title, base.abstract),
            ',\n')
          ) AS prompt,
        FROM VECTOR_SEARCH(
          TABLE `bqml_tutorial.embeddings`, 'ml_generate_embedding_result',
          (
            SELECT ml_generate_embedding_result, content AS query
            FROM ML.GENERATE_EMBEDDING(
              MODEL `bqml_tutorial.embedding_model`,
             (SELECT 'improving password security' AS content)
            )
          ),
        top_k => 5, options => '{"fraction_lists_to_search": 0.01}')
      ),
      STRUCT(600 AS max_output_tokens, TRUE AS flatten_json_output));

    The output is similar to the following:

    +------------------------------------------------+------------------------------------------------------------+
    |            generated                           | prompt                                                     |
    +------------------------------------------------+------------------------------------------------------------+
    | These patents suggest several project ideas to | Propose some project ideas to improve user password        |
    | improve user password security.  Here are      | security using the context below: patent title: Active     |
    | some, categorized by the patent they build     | new password entry dialog with compact visual indication   |
    | upon:                                          | of adherence to password policy, patent abstract:          |
    |                                                | An active new password entry dialog provides a compact     |
    | **I. Projects based on "Active new password    | visual indication of adherence to password policies. A     |
    | entry dialog with compact visual indication of | visual indication of progress towards meeting all          |
    | adherence to password policy":**               | applicable password policies is included in the display    |
    |                                                | and updated as new password characters are being...        |
    +------------------------------------------------+------------------------------------------------------------+
     

Clean up

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
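
Alternatively, if you want to keep the project and remove only the resources created in this tutorial, you can delete the bqml_tutorial dataset and everything it contains, including the tables, models, and vector index. The following DDL statement is a sketch of that approach:

-- Deletes the dataset and all of the objects it contains.
DROP SCHEMA IF EXISTS `bqml_tutorial` CASCADE;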

What's next