The CREATE MODEL statement for remote models over open text generation models
This document describes the CREATE MODEL statement for creating remote models in BigQuery over open text generation models deployed to Vertex AI. After you create a remote model, you can use it with the ML.GENERATE_TEXT function to generate a response from the referenced Vertex AI model.
CREATE MODEL syntax
{CREATE MODEL | CREATE MODEL IF NOT EXISTS | CREATE OR REPLACE MODEL} `project_id.dataset.model_name`
REMOTE WITH CONNECTION `project_id.region.connection_id`
OPTIONS(ENDPOINT = vertex_ai_https_endpoint);
CREATE MODEL
Creates and trains a new model in the specified dataset. If the model name exists, CREATE MODEL returns an error.
CREATE MODEL IF NOT EXISTS
Creates and trains a new model only if the model doesn't exist in the specified dataset.
CREATE OR REPLACE MODEL
Creates and trains a model and replaces an existing model with the same name in the specified dataset.
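For illustration, the following is a minimal sketch that uses the IF NOT EXISTS variant, so rerunning the statement succeeds without changes when the model already exists. The model name, connection, and endpoint shown here are placeholders:

CREATE MODEL IF NOT EXISTS `myproject.mydataset.mymodel`
  REMOTE WITH CONNECTION `myproject.us.my_connection`
  OPTIONS(ENDPOINT = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/endpoints/1234');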
model_name
The name of the model you're creating or replacing. The model name must be unique in the dataset: no other model or table can have the same name. The model name must follow the same naming rules as a BigQuery table. A model name can:
- Contain up to 1,024 characters
- Contain letters (upper or lower case), numbers, and underscores
model_name is not case-sensitive.
If you don't have a default project configured, then you must prepend the project ID to the model name in the following format, including backticks:
`[PROJECT_ID].[DATASET].[MODEL]`
For example, `myproject.mydataset.mymodel`.
REMOTE WITH CONNECTION
Syntax
`[PROJECT_ID].[LOCATION].[CONNECTION_ID]`
BigQuery uses a Cloud resource connection to interact with the Vertex AI endpoint.
The connection elements are as follows:
- PROJECT_ID: the project ID of the project that contains the connection.
- LOCATION: the location used by the connection. The connection must be in the same location as the dataset that contains the model.
- CONNECTION_ID: the connection ID, for example, myconnection. To find your connection ID, view the connection details in the Google Cloud console. The connection ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example, projects/myproject/locations/connection_location/connections/myconnection.
If you are using the remote model to analyze unstructured data from an object table, you must also grant the Vertex AI Service Agent role to the service account of the connection associated with the object table. You can find the object table's connection in the Google Cloud console, on the Details pane for the object table.
Example
`myproject.us.my_connection`
ENDPOINT
Syntax
ENDPOINT = vertex_ai_https_endpoint
Description
For vertex_ai_https_endpoint, specify the HTTPS endpoint that represents a model deployed to Vertex AI.
The following example shows how to create a remote model that uses an HTTPS endpoint:
ENDPOINT = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/endpoints/1234'
Supported open models
You can create a remote model over a deployed open model from either the Vertex AI Model Garden or Hugging Face.
Supported Vertex AI Model Garden models
The following open models in the Vertex AI Model Garden are supported:
- Gemma 2
- Gemma
- CodeGemma
- Llama 3.3
- Llama 3.2
- Llama 3.1
- Llama Guard
- Llama 3
- Llama 2
- Llama 2 (Quantized)
- Code Llama
- Mistral Self-host (7B & Nemo)
- Mixtral
- Dolly-v2
- Falcon-instruct (PEFT)
- Phi-3
- Qwen2
- Vicuna
Supported Hugging Face models
Hugging Face models that use the Text Generation Inference API and can be deployed to Vertex AI are supported. To find supported Hugging Face models, do the following:
- Open the list of Text Generation Inference models.
- Click the name of a model that you are interested in.
- Click Deploy. If the model has a Google Cloud deployment option, then it is supported.
Deploy open models
Use the following instructions to deploy an open model to Vertex AI so that you can then reference it from a remote model.
Deploy Vertex AI Model Garden models
To deploy an open model from the Model Garden, do the following:
- Go to Model Garden.
- Locate the model's model card in the Model Garden and click it.
- Use one of the deployment options on the model card. All models have a Deploy option for deploying directly, and an Open Notebook option for deploying the model by using a Colab Enterprise notebook. Some models also offer a Fine-Tune option for deploying a fine-tuned version of the model.
- Follow the workflow of the deployment option that you have chosen. For more information, see Use models in Model Garden.
Deploy Hugging Face models
To deploy a Hugging Face model, do the following:
- Go to Model Garden.
- Go to the Open source models on Hugging Face section and click Show more.
- For Model objective, select Text generation.
- Find and select a model to deploy.
- For Deployment environment, select Vertex AI.
- Optional: Specify the deployment details.
- Click Deploy.
Limitations
You can only use open models to process text data. Multimodal data isn't supported.
Example
The following example creates a BigQuery ML remote model over a model deployed to a Vertex AI endpoint:
CREATE MODEL `project_id.mydataset.mymodel`
  REMOTE WITH CONNECTION `myproject.us.test_connection`
  OPTIONS(ENDPOINT = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/endpoints/1234')
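After the remote model exists, you can query it with ML.GENERATE_TEXT. The following is a minimal sketch: the prompt text and parameter values are placeholders, and the exact set of supported parameters depends on the deployed model, so check the ML.GENERATE_TEXT reference for your model.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `project_id.mydataset.mymodel`,
  -- Any table or query that produces a STRING column named prompt.
  (SELECT 'Write a haiku about data warehouses.' AS prompt),
  STRUCT(
    0.2 AS temperature,          -- placeholder sampling setting
    128 AS max_output_tokens,    -- placeholder response length cap
    TRUE AS flatten_json_output  -- return the generated text as a flat STRING column
  )
);

The function sends each prompt value to the deployed Vertex AI endpoint and returns the generated text alongside the input columns.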
What's next
- Try generating text using your data.
- For more information about the supported SQL statements and functions for remote models that use HTTPS endpoints, see End-to-end user journey for each model.