This page shows you how to invoke online predictions from a Cloud SQL for PostgreSQL instance.
Cloud SQL lets you get online predictions in
your SQL code by calling the ml_predict_row()
function. For more information, see Build generative AI
applications using Cloud SQL.
Before you begin
Before you can invoke online predictions from a Cloud SQL instance, you must prepare your database and select an appropriate ML model.
Prepare your database
To prepare your database, complete the following steps:
Grant database users permission to use the
ml_predict_row()
function to run predictions:

Connect a
psql
client to the primary instance, as described in Connect using a psql client.

At the
psql
command prompt, connect to the database and grant permissions:

\c DB_NAME
GRANT EXECUTE ON FUNCTION ml_predict_row TO USER_NAME;
Replace the following:
DB_NAME: the name of the database for which you're granting permissions
USER_NAME: the name of the user for whom you're granting permissions
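If you script this setup, the same statement can be composed from code. The sketch below (Python, with a hypothetical user name) only builds the GRANT text with basic PostgreSQL identifier quoting; it doesn't open a database connection:

```python
def quote_ident(name: str) -> str:
    # Double any embedded double quotes, then wrap in double quotes,
    # as PostgreSQL expects for quoted identifiers.
    return '"' + name.replace('"', '""') + '"'

def grant_predict(user_name: str) -> str:
    # Build the GRANT statement shown above; the user name is a placeholder.
    return f"GRANT EXECUTE ON FUNCTION ml_predict_row TO {quote_ident(user_name)};"

# Hypothetical user name, for illustration only.
stmt = grant_predict("app_user")
```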
Select an ML model
When you call the ml_predict_row()
function, you must specify the location of an ML model.
The model that you specify can be one of these:
A model that's running in the Vertex AI Model Garden.
The
ml_predict_row()
function supports invoking predictions only on tabular or custom models.

A Vertex AI model with an active endpoint that you have Identity and Access Management (IAM) permission to access.
Cloud SQL doesn't support private endpoints for getting online predictions.
Invoke online predictions
You can use the ml_predict_row()
SQL function to invoke online predictions
against your data.
The format of the function's initial argument depends on whether the ML model that you want to use is in the Vertex AI Model Garden or is an endpoint running in a Google Cloud project.
Use a model in the Vertex AI Model Garden
To invoke an online prediction using an ML model that's running in the Vertex AI
Model Garden, use the following syntax for the ml_predict_row()
SQL function:
SELECT ml_predict_row('publishers/google/models/MODEL_ID', '{ "instances": [ INSTANCES ], "parameters": PARAMETERS }');
Make the following replacements:
MODEL_ID: the ID of the ML model to use (for example,
text-bison
for PaLM 2 for Text)
INSTANCES: the inputs to the prediction call, in JSON format
PARAMETERS: the parameters to the prediction call, in JSON format
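Because the second argument must be valid JSON, it can help to assemble it with a JSON library instead of concatenating strings by hand. A minimal sketch, assuming the text-bison request shape used in the examples later on this page:

```python
import json

def build_request(instances: list, parameters: dict) -> str:
    # Assemble the JSON document passed as ml_predict_row()'s second argument.
    return json.dumps({"instances": instances, "parameters": parameters})

# Hypothetical prompt and parameters, matching the text-bison examples below.
payload = build_request(
    [{"prompt": "What are three advantages of using Cloud SQL?"}],
    {"maxOutputTokens": 1024, "topK": 40, "topP": 0.8, "temperature": 0.2},
)
```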
For information about the model's JSON response messages, see Generative AI foundational model reference. For examples, see Example invocations.
Use a Vertex AI model endpoint
To invoke an online prediction using a Vertex AI model endpoint, use the following syntax for the ml_predict_row()
SQL function:
SELECT ml_predict_row('endpoints/ENDPOINT_ID', '{ "instances": [ INSTANCES ], "parameters": PARAMETERS }');
Make the following replacements:
ENDPOINT_ID: the ID of the model's endpoint
INSTANCES: the inputs to the prediction call, in JSON format
PARAMETERS: the parameters to the prediction call, in JSON format
For information about the model's JSON response messages, see PredictResponse.
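The two call forms differ only in the first argument. As an illustration, a small helper (hypothetical function names) can build either path string:

```python
def model_garden_path(model_id: str) -> str:
    # Model Garden form: publishers/google/models/MODEL_ID
    return f"publishers/google/models/{model_id}"

def endpoint_path(endpoint_id: str) -> str:
    # Deployed-endpoint form: endpoints/ENDPOINT_ID
    return f"endpoints/{endpoint_id}"
```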
Example invocations
The following example uses PaLM 2 for Text, available in the Model Garden, to generate text based on a short prompt that's provided as a literal argument to ml_predict_row():
SELECT ml_predict_row('projects/PROJECT_ID/locations/us-central1/publishers/google/models/text-bison', '{"instances":[{"prompt": "What are three advantages of using Cloud SQL as my SQL database server?"}], "parameters":{"maxOutputTokens":1024, "topK": 40, "topP":0.8, "temperature":0.2}}');
The response is a JSON object. For more information about the format of the object, see Response body.
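As an illustration of that shape, the sketch below navigates a hypothetical response payload to the generated text; the real field names and values depend on the model and are documented in the response-body reference:

```python
import json

# Hypothetical response payload, for illustration only; real responses
# are described in the model's response-body documentation.
raw = '{"predictions": [{"content": "1. Managed backups and replication..."}]}'
response = json.loads(raw)

# Same navigation the PostgreSQL arrow operator performs:
# ->'predictions'->0->'content'
content = response["predictions"][0]["content"]
```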
The next example modifies the previous one in the following ways:
The example uses the contents of the current database's
messages.message
column as input.

The example demonstrates the use of the
json_build_object()
function as an aid to formatting the function's parameters.
SELECT ml_predict_row('projects/PROJECT_ID/locations/us-central1/publishers/google/models/text-bison', json_build_object('instances', json_build_object('prompt', message), 'parameters', json_build_object('maxOutputTokens', 1024, 'topK', 40, 'topP', 0.8, 'temperature', 0.2))) FROM messages;
For every row in the messages
table, the returned JSON object now contains one entry in its predictions
array.
Because the response is a JSON object, you can pull specific fields from it using the PostgreSQL arrow operator:
SELECT ml_predict_row('projects/PROJECT_ID/locations/us-central1/publishers/google/models/text-bison', json_build_object('instances', json_build_object('prompt', message), 'parameters', json_build_object('maxOutputTokens', 1024, 'topK', 40, 'topP', 0.8, 'temperature', 0.2)))->'predictions'->0->'content' FROM messages;
For more example arguments to ml_predict_row(), see Try the Vertex AI Gemini API.