Generate text by using the ML.GENERATE_TEXT function
This tutorial shows you how to create a BigQuery ML remote model that references a Vertex AI natural language foundation model. You can then use that model in conjunction with the ML.GENERATE_TEXT function to analyze text in a BigQuery table.
Required permissions
To create a connection, you need membership in the following Identity and Access Management (IAM) role:
roles/bigquery.connectionAdmin
To grant permissions to the connection's service account, you need the following permission:
resourcemanager.projects.setIamPolicy
To create the model using BigQuery ML, you need the following IAM permissions:
bigquery.jobs.create
bigquery.models.create
bigquery.models.getData
bigquery.models.updateData
bigquery.models.updateMetadata
To run inference, you need the following permissions:
bigquery.tables.getData on the table
bigquery.models.getData on the model
bigquery.jobs.create
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.
Create a connection
Create a Cloud resource connection and get the connection's service account.
Select one of the following options:
Console
Go to the BigQuery page.
To create a connection, click Add data, and then click Connections to external data sources.
In the Connection type list, select BigLake and remote functions (Cloud Resource).
In the Connection ID field, enter a name for your connection.
Click Create connection.
Click Go to connection.
In the Connection info pane, copy the service account ID for use in a later step.
bq
In a command-line environment, create a connection:
bq mk --connection --location=REGION --project_id=PROJECT_ID \
    --connection_type=CLOUD_RESOURCE CONNECTION_ID
The --project_id parameter overrides the default project.

Replace the following:
- REGION: your connection region
- PROJECT_ID: your Google Cloud project ID
- CONNECTION_ID: an ID for your connection
When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.
Retrieve and copy the service account ID because you need it in a later step:
bq show --connection PROJECT_ID.REGION.CONNECTION_ID
The output is similar to the following:
name                       properties
1234.REGION.CONNECTION_ID  {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
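If you automate this step, you can parse the connection's JSON description instead of copying the ID by hand. The following is a minimal sketch; the sample JSON is a hand-written stand-in that mirrors the shape of `bq show --format=json --connection` output, not a live response:

```python
import json

# Stand-in for the output of:
#   bq show --format=json --connection PROJECT_ID.REGION.CONNECTION_ID
# (illustrative only; run the command yourself to get real values)
bq_show_output = """
{
  "name": "projects/1234/locations/REGION/connections/CONNECTION_ID",
  "cloudResource": {
    "serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"
  }
}
"""

# Extract the service account ID for use in the IAM grant step.
connection = json.loads(bq_show_output)
service_account = connection["cloudResource"]["serviceAccountId"]
print(service_account)
```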
Terraform
Append the following section to your main.tf file.

## This creates a cloud resource connection.
## Note: The cloud resource nested object has only one output-only field: serviceAccountId.
resource "google_bigquery_connection" "connection" {
  connection_id = "CONNECTION_ID"
  project       = "PROJECT_ID"
  location      = "REGION"
  cloud_resource {}
}

Replace the following:
- CONNECTION_ID: an ID for your connection
- PROJECT_ID: your Google Cloud project ID
- REGION: your connection region
If you get the following connection error, update the Google Cloud SDK:
Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
Give the service account access
Give your service account permission to use the connection. If you don't grant permission, you get an error when you use the model. Select one of the following options:
Console
Go to the IAM & Admin page.
Click Add. The Add principals dialog opens.
In the New principals field, enter the service account ID that you copied earlier.
In the Select a role field, select Vertex AI, and then select Vertex AI User.
Click Save.
gcloud
Use the
gcloud projects add-iam-policy-binding
command:
gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/aiplatform.user' \
    --condition=None
Replace the following:
- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
Create a model
In the Google Cloud console, go to the BigQuery page.
Using the SQL editor, create a remote model:
CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
  REMOTE WITH CONNECTION `PROJECT_ID.REGION.CONNECTION_ID`
  OPTIONS (ENDPOINT = 'ENDPOINT');
Replace the following:
- PROJECT_ID: your project ID
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model
- REGION: the region used by the connection
- CONNECTION_ID: the ID of your BigQuery connection. When you view the connection details in the Google Cloud console, this is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT: the text LLM to use. For example, ENDPOINT = 'text-bison-32k'. You can specify a particular version of the model by appending @version to the model name, for example, text-bison@001. If you don't specify a version, the latest version of the model is used.
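You can also issue the CREATE MODEL statement from a script. The sketch below assembles the DDL from its parts and shows (commented out, since it requires authenticated credentials) how you might run it with the google-cloud-bigquery client library. The project, dataset, connection, and endpoint names are placeholder assumptions:

```python
# Build the CREATE MODEL DDL for a BigQuery ML remote model from its parts.
def build_create_model_sql(project_id, dataset_id, model_name,
                           region, connection_id, endpoint):
    return (
        f"CREATE OR REPLACE MODEL `{project_id}.{dataset_id}.{model_name}`\n"
        f"  REMOTE WITH CONNECTION `{project_id}.{region}.{connection_id}`\n"
        f"  OPTIONS (ENDPOINT = '{endpoint}');"
    )

# Placeholder identifiers for illustration.
sql = build_create_model_sql("myproject", "mydataset", "llm_model",
                             "us", "myconnection", "text-bison")
print(sql)

# To actually create the model (requires authenticated credentials):
# from google.cloud import bigquery
# client = bigquery.Client(project="myproject")
# client.query(sql).result()
```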
Generate text
Generate text with the
ML.GENERATE_TEXT
function:
Prompt column
Generate text by using a table column to provide the prompt.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output));
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value in the range [1,1024] that sets the maximum number of tokens output by the model. Specify a lower value for shorter responses and a higher value for longer responses. The default is 50.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 1.0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine which tokens from the pool determined by TOP_K are selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 1.0.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns a short and moderately probable response.
- Returns the generated text and the safety attributes in separate columns.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.llm_model`,
  TABLE mydataset.prompts,
  STRUCT(
    0.4 AS temperature,
    100 AS max_output_tokens,
    0.5 AS top_p,
    40 AS top_k,
    TRUE AS flatten_json_output));
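If you want to run this example from outside the console, a minimal sketch with the google-cloud-bigquery client library follows. The model and table names are the placeholders from the example above, and the client call is commented out because it requires authenticated credentials:

```python
# The ML.GENERATE_TEXT query from the example above, as a Python string.
query = """
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.llm_model`,
  TABLE mydataset.prompts,
  STRUCT(
    0.4 AS temperature,
    100 AS max_output_tokens,
    0.5 AS top_p,
    40 AS top_k,
    TRUE AS flatten_json_output));
"""
print(query)

# To execute (requires authenticated credentials):
# from google.cloud import bigquery
# client = bigquery.Client()
# for row in client.query(query).result():
#     print(row)
```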
Prompt query
Generate text by using a query to provide the prompt.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output));
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value in the range [1,1024] that sets the maximum number of tokens output by the model. Specify a lower value for shorter responses and a higher value for longer responses. The default is 50.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 1.0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine which tokens from the pool determined by TOP_K are selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 1.0.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Returns a moderately long and more probable response.
- Returns the generated text and the safety attributes in separate columns.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.llm_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    0.2 AS temperature,
    650 AS max_output_tokens,
    0.2 AS top_p,
    15 AS top_k,
    TRUE AS flatten_json_output));
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short and moderately probable response.
- Doesn't return the generated text and the safety attributes in separate columns.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.llm_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    0.4 AS temperature,
    100 AS max_output_tokens,
    0.5 AS top_p,
    30 AS top_k,
    FALSE AS flatten_json_output));
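Because the four tuning parameters share documented ranges and defaults, a small helper can validate them before a query is built. The following is a sketch under the ranges and defaults stated on this page; the helper name and its interface are illustrative assumptions, not part of the BigQuery API:

```python
# Documented (lower, upper, default) for each ML.GENERATE_TEXT tuning parameter,
# taken from the parameter descriptions above.
PARAM_SPECS = {
    "max_output_tokens": (1, 1024, 50),
    "temperature": (0.0, 1.0, 1.0),
    "top_k": (1, 40, 40),
    "top_p": (0.0, 1.0, 1.0),
}

def build_struct_clause(flatten_json_output=False, **params):
    """Validate parameters against their documented ranges and emit the STRUCT(...) clause."""
    parts = []
    for name, (lo, hi, default) in PARAM_SPECS.items():
        value = params.get(name, default)
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} is outside the range [{lo},{hi}]")
        parts.append(f"{value} AS {name}")
    parts.append(f"{'TRUE' if flatten_json_output else 'FALSE'} AS flatten_json_output")
    return "STRUCT(" + ", ".join(parts) + ")"

print(build_struct_clause(temperature=0.4, max_output_tokens=100))
```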