Generative AI overview

This document describes the generative artificial intelligence (AI) features that BigQuery ML supports. These features let you perform AI tasks in BigQuery ML by using pre-trained Vertex AI models. Supported tasks include text generation and embedding generation.

You access a Vertex AI model by creating a remote model in BigQuery ML that represents the Vertex AI model's endpoint. After you create a remote model over the Vertex AI model that you want to use, you access that model's capabilities by running a BigQuery ML function against the remote model.

This approach lets you use the capabilities of these Vertex AI models in SQL queries to analyze BigQuery data.
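For example, a remote model over a Vertex AI model can be created with a `CREATE MODEL` statement similar to the following sketch. The project, dataset, connection, and model names here are placeholders, not resources from this document:

```sql
-- Sketch: create a remote model that represents a Vertex AI endpoint.
-- `mydataset.gemini_model` and `us.my_vertex_connection` are
-- placeholder names; replace them with your own dataset and
-- Cloud resource connection.
CREATE OR REPLACE MODEL `mydataset.gemini_model`
  REMOTE WITH CONNECTION `us.my_vertex_connection`
  OPTIONS (ENDPOINT = 'gemini-1.5-flash-002');
```

The connection must be a Cloud resource connection whose service account has permission to call Vertex AI.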

Workflow

You can use remote models over Vertex AI models and remote models over Cloud AI services together with BigQuery ML functions to accomplish complex data analysis and generative AI tasks.

The following diagram shows some typical workflows where you might use these capabilities together:

Diagram showing common workflows for remote models that use Vertex AI models or Cloud AI services.

Text generation

Text generation is a form of generative AI in which text is generated either from a prompt or from analysis of existing data. You can perform text generation using both text and multimodal data.

Some common use cases for text generation are as follows:

  • Generating creative content.
  • Generating code.
  • Generating chat or email responses.
  • Brainstorming, such as suggesting avenues for future products or services.
  • Content personalization, such as product suggestions.
  • Classifying data by applying one or more labels to the content to sort it into categories.
  • Identifying the key sentiments expressed in the content.
  • Summarizing the key ideas or impressions conveyed by the content.
  • Identifying one or more prominent entities in text or visual data.
  • Translating the content of text or audio data to a different language.
  • Generating text that matches the verbal content in audio data.
  • Captioning or performing Q&A on visual data.

Data enrichment is a common next step after text generation, in which you enrich insights from the initial analysis by combining them with additional data. For example, you might analyze images of home furnishings to generate text for a design_type column, so that each furnishings SKU has an associated description, such as mid-century modern or farmhouse.

Supported models

The following Vertex AI models are supported:

To provide feedback or request support for the models in preview, send an email to bqml-feedback@google.com.

Using text generation models

After you create the model, you can use the ML.GENERATE_TEXT function to interact with that model:

  • For remote models based on the Gemini 1.5 or 2.0 models, you can use the ML.GENERATE_TEXT function to analyze text, image, audio, video, or PDF content from an object table with a prompt you provide as a function argument, or you can generate text from a prompt you provide in a query or from a column in a standard table.
  • For remote models based on the gemini-1.0-pro-vision model, you can use the ML.GENERATE_TEXT function to analyze image or video content from an object table with a prompt you provide as a function argument.
  • For remote models based on gemini-1.0-pro, text-bison, text-bison-32k, or text-unicorn models, you can use the ML.GENERATE_TEXT function with a prompt you provide in a query or from a column in a standard table.
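As a minimal sketch of the standard-table case, the following query generates text from a prompt built out of a table column. The `mydataset.reviews` table and its `review_text` column are hypothetical; `mydataset.gemini_model` stands in for a remote model you created earlier:

```sql
-- Hypothetical example: generate a summary for each row of a
-- standard table. The `prompt` column name is required by
-- ML.GENERATE_TEXT; the table and column names are placeholders.
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_model`,
  (SELECT CONCAT('Summarize this review: ', review_text) AS prompt
   FROM `mydataset.reviews`),
  STRUCT(
    0.2 AS temperature,
    256 AS max_output_tokens,
    TRUE AS flatten_json_output));
```

With `flatten_json_output` set to `TRUE`, the generated text is returned in the `ml_generate_text_llm_result` column rather than as raw JSON.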

You can use grounding and safety attributes when you use Gemini models with the ML.GENERATE_TEXT function, provided that you are using a standard table for input. Grounding lets the Gemini model use additional information from the internet to generate more specific and factual responses. Safety attributes let the Gemini model filter the responses it returns based on the attributes you specify.
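As a sketch of how grounding and safety attributes can be passed as function arguments, the following query extends the basic call; treat the exact parameter shapes as assumptions to verify against the ML.GENERATE_TEXT reference:

```sql
-- Sketch: enable grounding with Google Search and set a safety
-- attribute threshold. The model name is a placeholder, and the
-- ground_with_google_search and safety_settings parameters should be
-- checked against the current ML.GENERATE_TEXT documentation.
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_model`,
  (SELECT 'Who won the most recent World Cup?' AS prompt),
  STRUCT(
    TRUE AS flatten_json_output,
    TRUE AS ground_with_google_search,
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category,
            'BLOCK_LOW_AND_ABOVE' AS threshold)] AS safety_settings));
```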

When you create a remote model that references any of the following models, you can optionally configure supervised tuning at the same time:

  • gemini-1.5-pro-002
  • gemini-1.5-flash-002
  • gemini-1.0-pro-002 (Preview)
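Configuring supervised tuning at model creation might look like the following sketch. The tuning option names (`MAX_ITERATIONS`, `PROMPT_COL`, `INPUT_LABEL_COLS`) and the training table are assumptions; check the CREATE MODEL reference for the exact supported options:

```sql
-- Sketch: create a remote model with supervised tuning configured at
-- creation time. Connection, dataset, table, and option values are
-- placeholders; the AS SELECT clause supplies the tuning examples.
CREATE OR REPLACE MODEL `mydataset.tuned_gemini`
  REMOTE WITH CONNECTION `us.my_vertex_connection`
  OPTIONS (
    ENDPOINT = 'gemini-1.5-flash-002',
    MAX_ITERATIONS = 300,
    PROMPT_COL = 'prompt',
    INPUT_LABEL_COLS = ['label'])
AS SELECT prompt, label FROM `mydataset.tuning_examples`;
```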

All inference occurs in Vertex AI. The results are stored in BigQuery.

Use the following topics to try text generation in BigQuery ML:

Embedding generation

An embedding is a high-dimensional numerical vector that represents a given entity, like a piece of text or an audio file. Generating embeddings lets you capture the semantics of your data in a way that makes it easier to reason about and compare the data.

Some common use cases for embedding generation are as follows:

  • Using retrieval-augmented generation (RAG) to augment model responses to user queries by referencing additional data from a trusted source. RAG provides better factual accuracy and response consistency, and also provides access to data that is newer than the model's training data.
  • Performing multimodal search. For example, using text input to search images.
  • Performing semantic search to find similar items for recommendations, substitution, and record deduplication.
  • Creating embeddings to use with a k-means model for clustering.

Supported models

The following models are supported:

For a smaller, lightweight text embedding, try using a pretrained TensorFlow model, such as NNLM, SWIVEL, or BERT.

Using embedding generation models

After you create the model, you can use the ML.GENERATE_EMBEDDING function to interact with it. For all types of supported models, ML.GENERATE_EMBEDDING works with data in standard tables. For multimodal embedding models, ML.GENERATE_EMBEDDING also works with visual content in object tables.
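As a minimal sketch of the standard-table case, the following query generates an embedding for each row of a hypothetical table; `mydataset.embedding_model` stands in for a remote model over a Vertex AI text embedding endpoint:

```sql
-- Hypothetical example: embed text from a standard table. The input
-- column must be named `content`; the table and model names are
-- placeholders.
SELECT content, ml_generate_embedding_result AS embedding
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.embedding_model`,
  (SELECT review_text AS content FROM `mydataset.reviews`),
  STRUCT(TRUE AS flatten_json_output));
```

The resulting `ml_generate_embedding_result` vectors can then be stored in a table for use with semantic search or as input features to a k-means model.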

For remote models, all inference occurs in Vertex AI. For other model types, all inference occurs in BigQuery. The results are stored in BigQuery.

Use the following topics to try embedding generation in BigQuery ML:

What's next