Generate text by using the ML.GENERATE_TEXT function
This document shows you how to create a BigQuery ML remote model that represents a hosted Vertex AI model. The hosted Vertex AI model can be a built-in Vertex AI text or multimodal model, or an Anthropic Claude model. Depending on the Vertex AI model that you choose, you can then use the ML.GENERATE_TEXT function to analyze unstructured data in object tables or text in standard tables.
Required permissions
To create a connection, you need membership in the following Identity and Access Management (IAM) role:

- roles/bigquery.connectionAdmin

To grant permissions to the connection's service account, you need the following permission:

- resourcemanager.projects.setIamPolicy

To create the model using BigQuery ML, you need the following IAM permissions:

- bigquery.jobs.create
- bigquery.models.create
- bigquery.models.getData
- bigquery.models.updateData
- bigquery.models.updateMetadata

To run inference, you need the following permissions:

- bigquery.tables.getData on the table
- bigquery.models.getData on the model
- bigquery.jobs.create
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.
Create a connection
Create a Cloud resource connection and get the connection's service account.
Select one of the following options:
Console
- Go to the BigQuery page.
- To create a connection, click Add, and then click Connections to external data sources.
- In the Connection type list, select Vertex AI remote models, remote functions and BigLake (Cloud Resource).
- In the Connection ID field, enter a name for your connection.
- Click Create connection.
- Click Go to connection.
- In the Connection info pane, copy the service account ID for use in a later step.
bq
In a command-line environment, create a connection:

bq mk --connection --location=REGION --project_id=PROJECT_ID \
    --connection_type=CLOUD_RESOURCE CONNECTION_ID

The --project_id parameter overrides the default project.

Replace the following:

- REGION: your connection region
- PROJECT_ID: your Google Cloud project ID
- CONNECTION_ID: an ID for your connection
When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.
Troubleshooting: If you get the following connection error, update the Google Cloud SDK:
Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
Retrieve and copy the service account ID for use in a later step:
bq show --connection PROJECT_ID.REGION.CONNECTION_ID
The output is similar to the following:

name                       properties
1234.REGION.CONNECTION_ID  {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
Terraform
Use the
google_bigquery_connection
resource.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following example creates a Cloud resource connection named
my_cloud_resource_connection
in the US
region:
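The example configuration itself isn't reproduced here; a minimal sketch using the google_bigquery_connection resource looks like the following (the resource label default is illustrative):

```terraform
resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  location      = "US"

  # A Cloud resource connection takes no nested arguments;
  # BigQuery creates and attaches the service account for you.
  cloud_resource {}
}

# Exposes the generated service account ID, which you grant
# the Vertex AI User role in a later step.
output "connection_service_account" {
  value = google_bigquery_connection.default.cloud_resource[0].service_account_id
}
```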
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
- Set the default Google Cloud project where you want to apply your Terraform configurations. You only need to run this command once per project, and you can run it in any directory.

  export GOOGLE_CLOUD_PROJECT=PROJECT_ID

  Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.

  mkdir DIRECTORY && cd DIRECTORY && touch main.tf

- If you are following a tutorial, you can copy the sample code in each section or step. Copy the sample code into the newly created main.tf. Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.

  terraform init

  Optionally, to use the latest Google provider version, include the -upgrade option:

  terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:

  terraform plan

  Make corrections to the configuration as necessary.

- Apply the Terraform configuration by running the following command and entering yes at the prompt:

  terraform apply

  Wait until Terraform displays the "Apply complete!" message.

- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Give the service account access
Grant the connection's service account the Vertex AI User role.
If you plan to specify the endpoint as a URL when you create the remote model, for example endpoint = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/publishers/google/models/text-embedding-004', grant this role in the same project that you specify in the URL.

If you plan to specify the endpoint by using the model name when you create the remote model, for example endpoint = 'text-embedding-004', grant this role in the same project where you plan to create the remote model.

Granting the role in a different project results in the error bqcx-1234567890-xxxx@gcp-sa-bigquery-condel.iam.gserviceaccount.com does not have the permission to access resource.
To grant the role, follow these steps:
Console
- Go to the IAM & Admin page.
- Click Add. The Add principals dialog opens.
- In the New principals field, enter the service account ID that you copied earlier.
- In the Select a role field, select Vertex AI, and then select Vertex AI User.
- Click Save.
gcloud
Use the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/aiplatform.user' \
    --condition=None

Replace the following:

- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
Enable the Vertex AI model
This step is only needed if you are using a Claude model.
- In the Google Cloud console, go to the Vertex AI Model Garden page.
- Search or browse for the Claude model that you want to use.
- Click the model card.
- On the model page, click Enable.
- Fill out the requested enablement information, and then click Next.
- In the Terms and conditions section, select the checkbox.
- Click Agree to agree to the terms and conditions and enable the model.
Create a BigQuery ML remote model
- In the Google Cloud console, go to the BigQuery page.
- Using the SQL editor, create a remote model:

  CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
    REMOTE WITH CONNECTION `PROJECT_ID.REGION.CONNECTION_ID`
    OPTIONS (ENDPOINT = 'ENDPOINT');

  Replace the following:

  - PROJECT_ID: your project ID
  - DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
  - MODEL_NAME: the name of the model
  - REGION: the region used by the connection
  - CONNECTION_ID: the ID of your BigQuery connection. When you view the connection details in the Google Cloud console, this is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.
  - ENDPOINT: the name of the supported Vertex AI model to use. For some types of models, you can specify a particular version of the model. For information about supported model versions for different model types, see ENDPOINT.
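For example, the following statement creates a remote model over the gemini-1.5-flash endpoint; the project, dataset, and connection names are illustrative:

```sql
CREATE OR REPLACE MODEL `myproject.mydataset.text_model`
  REMOTE WITH CONNECTION `myproject.us.myconnection`
  OPTIONS (ENDPOINT = 'gemini-1.5-flash');
```

The model's dataset (mydataset) and the connection (myconnection) must be in the same location.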
Generate text from text data by using a prompt from a table
Generate text by using the
ML.GENERATE_TEXT
function
with a remote model, and using prompt data from a table column:
gemini-1.5-flash
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences,
    GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
    SAFETY_SETTINGS AS safety_settings));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,2.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to TRUE, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(TRUE AS flatten_json_output));
gemini-1.5-pro
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences,
    GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
    SAFETY_SETTINGS AS safety_settings));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,2.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to TRUE, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(TRUE AS flatten_json_output));
gemini-pro
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences,
    GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
    SAFETY_SETTINGS AS safety_settings));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to TRUE, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns a short and moderately probable response.
- Flattens the JSON response into separate columns.
- Retrieves and returns public web data for response grounding.
- Filters out unsafe responses by using two safety settings.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(
    0.4 AS temperature,
    100 AS max_output_tokens,
    0.5 AS top_p,
    40 AS top_k,
    TRUE AS flatten_json_output,
    TRUE AS ground_with_google_search,
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category, 'BLOCK_LOW_AND_ABOVE' AS threshold),
     STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings));
Claude
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(TRUE AS flatten_json_output));
text-bison
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,1024]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(TRUE AS flatten_json_output));
text-bison-32
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(TRUE AS flatten_json_output));
text-unicorn
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_K AS top_k,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences));
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,1024]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(TRUE AS flatten_json_output));
Generate text from text data by using a prompt from a query
Generate text by using the
ML.GENERATE_TEXT
function
with a remote model, and using a query that provides the prompt data:
gemini-1.5-flash
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(
    TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature,
    TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences,
    GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
    SAFETY_SETTINGS AS safety_settings));

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,2.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to TRUE, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));
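The optional arguments described above can also be combined in a single call. The following sketch is illustrative only: it reuses the hypothetical mydataset.text_model and mydataset.articles names from the examples, and the 'END_OF_SUMMARY' stop sequence is an arbitrary marker, not a required value.

```sql
-- Illustrative sketch: request a longer, more creative summary that
-- stops generating at a marker string. Model and table names are the
-- hypothetical ones used in the preceding examples.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text: ', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    512 AS max_output_tokens,      -- allow a longer response
    0.8 AS temperature,            -- more varied wording
    ['END_OF_SUMMARY'] AS stop_sequences,
    TRUE AS flatten_json_output));
```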
gemini-1.5-pro
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
  TOP_P AS top_p, FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences,
  GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
  SAFETY_SETTINGS AS safety_settings)
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,2.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to TRUE, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));
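The ground_with_google_search and safety_settings arguments described above can be exercised together. The following sketch is illustrative only and reuses the hypothetical model and table names from the examples.

```sql
-- Illustrative sketch: flatten the output, ground responses with
-- Google Search, and apply two content safety thresholds.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    TRUE AS flatten_json_output,
    TRUE AS ground_with_google_search,
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category,
            'BLOCK_LOW_AND_ABOVE' AS threshold),
     STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category,
            'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings));
```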
gemini-pro
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
  TOP_K AS top_k, TOP_P AS top_p,
  FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences,
  GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
  SAFETY_SETTINGS AS safety_settings)
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to TRUE, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));

Example 3

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Flattens the JSON response into separate columns.
- Retrieves and returns public web data for response grounding.
- Filters out unsafe responses by using two safety settings.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    TRUE AS flatten_json_output,
    TRUE AS ground_with_google_search,
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category,
    'BLOCK_LOW_AND_ABOVE' AS threshold),
    STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category,
    'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings));
Claude
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(TOKENS AS max_output_tokens, TOP_K AS top_k,
  TOP_P AS top_p, FLATTEN_JSON AS flatten_json_output)
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));
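For Claude models, top_k and top_p are the available sampling controls. The following sketch is illustrative only, reusing the hypothetical model and table names from the examples; the specific top_k and top_p values are arbitrary.

```sql
-- Illustrative sketch: narrow the sampling pool for more
-- deterministic Claude responses.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    256 AS max_output_tokens,
    5 AS top_k,        -- consider only the 5 most likely tokens
    0.5 AS top_p,
    TRUE AS flatten_json_output));
```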
text-bison
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
  TOP_K AS top_k, TOP_P AS top_p,
  FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences)
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,1024]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));
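The stop_sequences argument can be used to truncate PaLM responses at a known boundary. This sketch is illustrative only; it reuses the hypothetical model and table names from the examples, and the choice of a blank line as the stop marker is an assumption about the response format, not a required value.

```sql
-- Illustrative sketch: deterministic, short output that stops at the
-- first blank line in the response.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('List three keywords for this text: ', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    50 AS max_output_tokens,
    0 AS temperature,              -- always pick the most likely token
    ['\n\n'] AS stop_sequences,    -- cut the response at a blank line
    TRUE AS flatten_json_output));
```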
text-bison-32k
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
  TOP_K AS top_k, TOP_P AS top_p,
  FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences)
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));
text-unicorn
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (PROMPT_QUERY),
  STRUCT(TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
  TOP_K AS top_k, TOP_P AS top_p,
  FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences)
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- PROMPT_QUERY: a query that provides the prompt data.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,1024]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is 40.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example 1

The following example shows a request with these characteristics:

- Prompts for a summary of the text in the body column of the articles table.
- Returns the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(TRUE AS flatten_json_output));

Example 2

The following example shows a request with these characteristics:

- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't return the generated text and the safety attributes in separate columns.

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(
    100 AS max_output_tokens,
    FALSE AS flatten_json_output));
Generate text from object table data
Generate text by using the ML.GENERATE_TEXT function with a remote model, using an object table to provide the content to analyze and providing the prompt data in the prompt parameter:
gemini-1.5-flash
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(PROMPT AS prompt, TOKENS AS max_output_tokens,
  TEMPERATURE AS temperature, TOP_P AS top_p,
  FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences,
  SAFETY_SETTINGS AS safety_settings)
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the object table that contains the content to analyze. For more information on what types of content you can analyze, see Input.

  The Cloud Storage bucket used by the object table should be in the same project where you have created the model and where you are calling the ML.GENERATE_TEXT function. If you want to call the ML.GENERATE_TEXT function in a different project than the one that contains the Cloud Storage bucket used by the object table, you must grant the Storage Admin role at the bucket level to the service-A@gcp-sa-aiplatform.iam.gserviceaccount.com service account.
- PROMPT: the prompt to use to analyze the content.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,2.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Examples

This example analyzes video content from an object table that's named videos and describes the content in each video:

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.video_model`,
  TABLE `mydataset.videos`,
  STRUCT('What is happening in this video?' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));

This example translates and transcribes audio content from an object table that's named feedback:

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.audio_model`,
  TABLE `mydataset.feedback`,
  STRUCT('What is the content of this audio clip, translated into Spanish?' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));

This example classifies PDF content from an object table that's named invoices:

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.classify_model`,
  TABLE `mydataset.invoices`,
  STRUCT('Classify this document based on the invoice total, using the following categories: 0 to 100, 101 to 200, greater than 200' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));
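The same pattern applies to image content. The following sketch is illustrative only: the object table product_images and the remote model vision_model are hypothetical names that do not appear in the examples above, so substitute your own.

```sql
-- Illustrative sketch: describe each image in a hypothetical object
-- table of product photos.
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.vision_model`,
  TABLE `mydataset.product_images`,
  STRUCT('Write a one-sentence product description for this image.' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));
```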
gemini-1.5-pro
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(PROMPT AS prompt, TOKENS AS max_output_tokens,
  TEMPERATURE AS temperature, TOP_P AS top_p,
  FLATTEN_JSON AS flatten_json_output,
  STOP_SEQUENCES AS stop_sequences,
  SAFETY_SETTINGS AS safety_settings)
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the object table that contains the content to analyze. For more information on what types of content you can analyze, see Input.

  The Cloud Storage bucket used by the object table should be in the same project where you have created the model and where you are calling the ML.GENERATE_TEXT function. If you want to call the ML.GENERATE_TEXT function in a different project than the one that contains the Cloud Storage bucket used by the object table, you must grant the Storage Admin role at the bucket level to the service-A@gcp-sa-aiplatform.iam.gserviceaccount.com service account.
- PROMPT: the prompt to use to analyze the content.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,2.0] that controls the degree of randomness in token selection. The default is 0.

  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.

  Supported categories are as follows:

  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT

  Supported thresholds are as follows:

  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED

  For more information, refer to the definition of safety category and blocking threshold.
Examples

This example analyzes video content from an object table that's named videos and describes the content in each video:

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.video_model`,
  TABLE `mydataset.videos`,
  STRUCT('What is happening in this video?' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));

This example translates and transcribes audio content from an object table that's named feedback:

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.audio_model`,
  TABLE `mydataset.feedback`,
  STRUCT('What is the content of this audio clip, translated into Spanish?' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));

This example classifies PDF content from an object table that's named invoices:

SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.classify_model`,
  TABLE `mydataset.invoices`,
  STRUCT('Classify this document based on the invoice total, using the following categories: 0 to 100, 101 to 200, greater than 200' AS PROMPT,
  TRUE AS FLATTEN_JSON_OUTPUT));
gemini-pro-vision

```sql
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
  STRUCT(PROMPT AS prompt, TOKENS AS max_output_tokens,
    TEMPERATURE AS temperature, TOP_K AS top_k, TOP_P AS top_p,
    FLATTEN_JSON AS flatten_json_output,
    STOP_SEQUENCES AS stop_sequences,
    SAFETY_SETTINGS AS safety_settings)
);
```
Replace the following:

- `PROJECT_ID`: your project ID.
- `DATASET_ID`: the ID of the dataset that contains the model.
- `MODEL_NAME`: the name of the model.
- `TABLE_NAME`: the name of the object table that contains the content to analyze. For more information on what types of content you can analyze, see Input.

  The Cloud Storage bucket used by the object table should be in the same project where you have created the model and where you are calling the `ML.GENERATE_TEXT` function. If you want to call the `ML.GENERATE_TEXT` function in a different project than the one that contains the Cloud Storage bucket used by the object table, you must grant the Storage Admin role at the bucket level to the `service-A@gcp-sa-aiplatform.iam.gserviceaccount.com` service account.
- `PROMPT`: the prompt to use to analyze the content.
- `TOKENS`: an `INT64` value that sets the maximum number of tokens that can be generated in the response. This value must be in the range `[1,2048]`. Specify a lower value for shorter responses and a higher value for longer responses. The default is `2048`.
- `TEMPERATURE`: a `FLOAT64` value in the range `[0.0,1.0]` that controls the degree of randomness in token selection. The default is `0.4`.

  Lower values for `temperature` are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for `temperature` can lead to more diverse or creative results. A value of `0` for `temperature` is deterministic, meaning that the highest probability response is always selected.
- `TOP_K`: an `INT64` value in the range `[1,40]` that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. The default is `32`.
- `TOP_P`: a `FLOAT64` value in the range `[0.0,1.0]` that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is `0.95`.
- `FLATTEN_JSON`: a `BOOL` value that determines whether to return the generated text and the safety attributes in separate columns. The default is `FALSE`.
- `STOP_SEQUENCES`: an `ARRAY<STRING>` value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- `SAFETY_SETTINGS`: an `ARRAY<STRUCT<STRING AS category, STRING AS threshold>>` value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element in the struct specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both `STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold)` and `STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold)`. If there is no safety setting for a given category, the `BLOCK_MEDIUM_AND_ABOVE` safety setting is used.

  Supported categories are as follows:

  - `HARM_CATEGORY_HATE_SPEECH`
  - `HARM_CATEGORY_DANGEROUS_CONTENT`
  - `HARM_CATEGORY_HARASSMENT`
  - `HARM_CATEGORY_SEXUALLY_EXPLICIT`

  Supported thresholds are as follows:

  - `BLOCK_NONE` (Restricted)
  - `BLOCK_LOW_AND_ABOVE`
  - `BLOCK_MEDIUM_AND_ABOVE` (Default)
  - `BLOCK_ONLY_HIGH`
  - `HARM_BLOCK_THRESHOLD_UNSPECIFIED`

  For more information, refer to the definition of safety category and blocking threshold.
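Putting several of these arguments together, a call that requests a short, fairly deterministic response might look like the following sketch; the model and table names are placeholders, and the parameter values are illustrative, not recommendations:

```sql
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.vision_model`,  -- hypothetical model name
  TABLE `mydataset.images`,        -- hypothetical object table
  STRUCT(
    'Describe this image in one sentence.' AS prompt,
    128 AS max_output_tokens,      -- cap the response length
    0.2 AS temperature,            -- low randomness
    32 AS top_k,
    0.95 AS top_p,
    TRUE AS flatten_json_output,
    ['\n\n'] AS stop_sequences));  -- stop at the first blank line
```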
Examples

This example analyzes video content from an object table that's named `videos` and describes the content in each video:

```sql
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.video_model`,
  TABLE `mydataset.videos`,
  STRUCT('What is happening in this video?' AS PROMPT,
    TRUE AS FLATTEN_JSON_OUTPUT));
```
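The same query can also be run without flattening: with `FALSE AS FLATTEN_JSON_OUTPUT` (the default), the generated text and the safety attributes are returned together as a single JSON value rather than in separate columns. A sketch of that variant, using the same hypothetical model and table:

```sql
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.video_model`,
  TABLE `mydataset.videos`,
  STRUCT('What is happening in this video?' AS PROMPT,
    -- FALSE (the default) keeps text and safety attributes in one JSON result
    FALSE AS FLATTEN_JSON_OUTPUT));
```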