Get batch text embeddings predictions

Batch predictions are an efficient way to send large numbers of embeddings requests that aren't latency sensitive. Unlike online predictions, where you are limited to one input request at a time, you can send a large number of LLM requests in a single batch request. As with batch prediction for tabular data in Vertex AI, you specify an output location, provide your input, and the responses asynchronously populate your output location.

After you submit a batch request and review its results, you can adjust the model through model tuning. After tuning, you can submit your updated model for batch predictions as usual. To learn more about tuning models, see Tune language foundation models.

Text embeddings models that support batch predictions

All stable versions of the textembedding-gecko model support batch predictions, except for textembedding-gecko-multilingual@001. Stable versions are versions that are no longer in preview and are fully supported for production environments. For the full list of supported embedding models, see Embedding models and versions.

Prepare your inputs

The input for batch requests is a list of prompts that can be stored either in a BigQuery table or as a JSON Lines (JSONL) file in Cloud Storage. Each request can include up to 30,000 prompts.
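For example, the following sketch writes two prompts to a local JSONL file that you can then upload to Cloud Storage; the file name and bucket are placeholders:

import json

# Hypothetical prompts; replace with your own inputs.
prompts = [
    "Give a short description of a machine learning model:",
    "Best recipe for banana bread:",
]

# Each line is a standalone JSON object with a single "content" field.
with open("embeddings_input.jsonl", "w") as f:
    for prompt in prompts:
        f.write(json.dumps({"content": prompt}) + "\n")

You can then copy the file to your bucket, for example with gsutil cp embeddings_input.jsonl gs://your-bucket-unique-name/.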

JSONL example

This section shows examples of how to format JSONL input and output.

JSONL input example

{"content":"Give a short description of a machine learning model:"}
{"content":"Best recipe for banana bread:"}

JSONL output example

{"instance":{"content":"Give..."},"predictions": [{"embeddings":{"statistics":{"token_count":8,"truncated":false},"values":[0.2,....]}}],"status":""}
{"instance":{"content":"Best..."},"predictions": [{"embeddings":{"statistics":{"token_count":3,"truncated":false},"values":[0.1,....]}}],"status":""}

BigQuery example

This section shows examples of how to format BigQuery input and output.

BigQuery input example

This example shows a single-column BigQuery table.

content
"Give a short description of a machine learning model:"
"Best recipe for banana bread:"

BigQuery output example

This example shows a three-column BigQuery table. The status column is empty when the prediction succeeds.

content | predictions | status
"Give a short description of a machine learning model:" | '[{"embeddings":{"statistics":{"token_count":8,"truncated":false},"values":[0.1,....]}}]' | ""
"Best recipe for banana bread:" | '[{"embeddings":{"statistics":{"token_count":3,"truncated":false},"values":[0.2,....]}}]' | ""
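To read the vectors back out of the output table, you can query it and decode the predictions column, which holds the JSON shown above as a string. A sketch, using placeholder project and table names:

import json
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")

# Placeholder output table; use the table that your batch job wrote to.
query = """
SELECT content, predictions, status
FROM `your-project-id.llm_dataset.embedding_output`
"""
for row in client.query(query).result():
    if row["status"]:  # a non-empty status indicates a failed instance
        continue
    values = json.loads(row["predictions"])[0]["embeddings"]["values"]
    print(row["content"][:40], len(values))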

Request a batch response

Depending on the number of input items that you've submitted, a batch prediction task can take some time to complete.

REST

To create a batch prediction job by using the Vertex AI API, send a POST request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • PROJECT_ID: The ID of your Google Cloud project.
  • BP_JOB_NAME: The job name.
  • INPUT_URI: The input source URI. This is either a BigQuery table URI or a JSONL file URI in Cloud Storage.
  • OUTPUT_URI: The output target URI. This is either a BigQuery table URI or a Cloud Storage directory URI.

HTTP method and URL:

POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs

Request JSON body:

{
  "name": "BP_JOB_NAME",
  "displayName": "BP_JOB_NAME",
  "model": "publishers/google/models/textembedding-gecko",
  "inputConfig": {
    "instancesFormat": "bigquery",
    "bigquerySource": {
      "inputUri": "INPUT_URI"
    }
  },
  "outputConfig": {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {
      "outputUri": "OUTPUT_URI"
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "name": "projects/123456789012/locations/us-central1/batchPredictionJobs/1234567890123456789",
  "displayName": "BP_sample_publisher_BQ_20230712_134650",
  "model": "projects/{PROJECT_ID}/locations/us-central1/models/textembedding-gecko",
  "inputConfig": {
    "instancesFormat": "bigquery",
    "bigquerySource": {
      "inputUri": "bq://project_name.dataset_name.text_input"
    }
  },
  "modelParameters": {},
  "outputConfig": {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {
      "outputUri": "bq://project_name.llm_dataset.embedding_out_BP_sample_publisher_BQ_20230712_134650"
    }
  },
  "state": "JOB_STATE_PENDING",
  "createTime": "2023-07-12T20:46:52.148717Z",
  "updateTime": "2023-07-12T20:46:52.148717Z",
  "labels": {
    "owner": "sample_owner",
    "product": "llm"
  },
  "modelVersionId": "1",
  "modelMonitoringStatus": {}
}

The response includes a unique identifier for the batch job. You can poll for the status of the batch job using the BATCH_JOB_ID until the job state is JOB_STATE_SUCCEEDED. For example:

curl \
  -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID
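If you prefer to poll from Python, the aiplatform client can look up the same job by its full resource name (the name field from the creation response); a minimal sketch with a placeholder project ID:

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Look up the job by the resource name returned when you created it.
job = aiplatform.BatchPredictionJob(
    "projects/your-project-id/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID"
)
print(job.state)  # for example, JobState.JOB_STATE_SUCCEEDED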

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

import vertexai
from vertexai.preview import language_models

# TODO(developer): Update & uncomment the lines below
# PROJECT_ID = "your-project-id"
# OUTPUT_URI = "gs://your-bucket-unique-name/directory/"
vertexai.init(project=PROJECT_ID, location="us-central1")
input_uri = (
    "gs://cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl"
)
# Format: `gs://your-bucket-unique-name/directory/` or `bq://project_name.llm_dataset`
output_uri = OUTPUT_URI

textembedding_model = language_models.TextEmbeddingModel.from_pretrained(
    "textembedding-gecko@003"
)

batch_prediction_job = textembedding_model.batch_predict(
    dataset=[input_uri],
    destination_uri_prefix=output_uri,
)
print(batch_prediction_job.display_name)
print(batch_prediction_job.resource_name)
print(batch_prediction_job.state)
# Example response:
# BatchPredictionJob 2024-09-10 15:47:51.336391
# projects/1234567890/locations/us-central1/batchPredictionJobs/123456789012345
# JobState.JOB_STATE_SUCCEEDED

Retrieve batch output

When a batch prediction task is complete, the output is stored in the Cloud Storage bucket or BigQuery table that you specified in your request.
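For example, if you wrote your output to Cloud Storage, you can list and parse the result files under the output directory. A sketch with placeholder bucket and directory names, assuming JSONL result files as shown earlier:

import json
from google.cloud import storage

client = storage.Client(project="your-project-id")
bucket = client.bucket("your-bucket-unique-name")

# The job writes one or more result files under your output directory.
for blob in bucket.list_blobs(prefix="directory/"):
    if not blob.name.endswith(".jsonl"):
        continue
    for line in blob.download_as_text().splitlines():
        result = json.loads(line)
        print(result["instance"]["content"][:40])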

What's next