# Get batch text embeddings predictions
Getting responses in a batch is a way to efficiently send large numbers of non-latency-sensitive embeddings requests. Unlike getting online responses, where you are limited to one input request at a time, you can send a large number of LLM requests in a single batch request. As with batch prediction for [tabular data in Vertex AI](/vertex-ai/docs/tabular-data/classification-regression/get-batch-predictions), you determine your output location, add your input, and the responses asynchronously populate your output location.
Text embeddings models that support batch predictions
------------------------------------------------------

All stable versions of text embedding models support batch predictions. Stable versions are versions that are no longer in preview and are fully supported for production environments. For the full list of supported embedding models, see [Embedding model and versions](/vertex-ai/generative-ai/docs/learn/model-versioning#embedding_models_and_versions).
Prepare your inputs
-------------------

The input for batch requests is a list of prompts, stored either in a BigQuery table or as a [JSON Lines (JSONL)](https://jsonlines.org/) file in Cloud Storage. Each request can include up to 30,000 prompts.
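If you use Cloud Storage, the following is a minimal sketch of building and uploading a JSONL input file. It assumes the `google-cloud-storage` client library; the bucket and object names are hypothetical placeholders.

```python
# Minimal sketch: write each prompt as a {"content": ...} JSON object,
# one per line, and upload the result to Cloud Storage.
import json

from google.cloud import storage  # pip install google-cloud-storage

prompts = [
    "Give a short description of a machine learning model:",
    "Best recipe for banana bread:",
]
jsonl = "\n".join(json.dumps({"content": p}) for p in prompts)

# Hypothetical bucket and object names.
bucket = storage.Client().bucket("your-bucket")
bucket.blob("embeddings/embeddings_input.jsonl").upload_from_string(
    jsonl, content_type="application/json"
)
```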
### JSONL example

This section shows examples of how to format JSONL input and output.

#### JSONL input example
{"content":"Give a short description of a machine learning model:"}{"content":"Best recipe for banana bread:"}
#### JSONL output example

```
{"instance":{"content":"Give..."},"predictions": [{"embeddings":{"statistics":{"token_count":8,"truncated":false},"values":[0.2,....]}}],"status":""}
{"instance":{"content":"Best..."},"predictions": [{"embeddings":{"statistics":{"token_count":3,"truncated":false},"values":[0.1,....]}}],"status":""}
```
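After the job succeeds, each output line pairs an input instance with its prediction. Here is a minimal sketch of reading the vectors back out of a downloaded output file; the local file name is a hypothetical placeholder.

```python
# Minimal sketch: collect the embedding vector for each prompt from a
# downloaded output JSONL file. The file name is a hypothetical placeholder.
import json

embeddings = {}
with open("embeddings_output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        content = record["instance"]["content"]
        # The vector lives under predictions[0].embeddings.values.
        embeddings[content] = record["predictions"][0]["embeddings"]["values"]

for content, vector in embeddings.items():
    print(f"{content[:40]!r}: {len(vector)} dimensions")
```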
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-09-04 (UTC)"],[],[],null,["# Get batch text embeddings predictions\n\nGetting responses in a batch is a way to efficiently send large numbers of non-latency\nsensitive embeddings requests. Different from getting online responses,\nwhere you are limited to one input request at a time, you can send a large number\nof LLM requests in a single batch request. Similar to how batch prediction is done\nfor [tabular data in Vertex AI](/vertex-ai/docs/tabular-data/classification-regression/get-batch-predictions),\nyou determine your output location, add your input, and your responses asynchronously\npopulate into your output location.\n\nText embeddings models that support batch predictions\n-----------------------------------------------------\n\nAll stable versions of text embedding models support batch predictions. Stable\nversions are versions which are no longer in preview and are fully supported for\nproduction environments. To see the full list of supported embedding models, see\n[Embedding model and versions](/vertex-ai/generative-ai/docs/learn/model-versioning#embedding_models_and_versions).\n\nPrepare your inputs\n-------------------\n\nThe input for batch requests are a list of prompts that can either be stored in\na BigQuery table or as a\n[JSON Lines (JSONL)](https://jsonlines.org/) file in\nCloud Storage. Each request can include up to 30,000 prompts.\n\n### JSONL example\n\nThis section shows examples of how to format JSONL input and output.\n\n#### JSONL input example\n\n {\"content\":\"Give a short description of a machine learning model:\"}\n {\"content\":\"Best recipe for banana bread:\"}\n\n#### JSONL output example\n\n {\"instance\":{\"content\":\"Give...\"},\"predictions\": [{\"embeddings\":{\"statistics\":{\"token_count\":8,\"truncated\":false},\"values\":[0.2,....]}}],\"status\":\"\"}\n {\"instance\":{\"content\":\"Best...\"},\"predictions\": [{\"embeddings\":{\"statistics\":{\"token_count\":3,\"truncated\":false},\"values\":[0.1,....]}}],\"status\":\"\"}\n\n### BigQuery example\n\nThis section shows examples of how to format BigQuery input and output.\n\n#### BigQuery input example\n\nThis example shows a single column BigQuery table.\n\n#### BigQuery output example\n\nRequest a batch response\n------------------------\n\nDepending on the number of input items that you've submitted, a\nbatch generation task can take some time to complete. \n\n### REST\n\nTo test a text prompt by using the Vertex AI API, send a POST request to the\npublisher model endpoint.\n\n\nBefore using any of the request data,\nmake the following replacements:\n\n- \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: The ID of your Google Cloud project.\n- \u003cvar translate=\"no\"\u003eBP_JOB_NAME\u003c/var\u003e: The job name.\n- \u003cvar translate=\"no\"\u003eINPUT_URI\u003c/var\u003e: The input source URI. 
#### BigQuery output example

Request a batch response
------------------------

Depending on the number of input items that you've submitted, a batch generation task can take some time to complete.

### REST

To test a text prompt by using the Vertex AI API, send a POST request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

- `PROJECT_ID`: The ID of your Google Cloud project.
- `BP_JOB_NAME`: The job name.
- `INPUT_URI`: The input source URI. This is either a BigQuery table URI or a JSONL file URI in Cloud Storage.
- `OUTPUT_URI`: The output target URI.

HTTP method and URL:

```
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs
```

Request JSON body:

```
{
  "name": "BP_JOB_NAME",
  "displayName": "BP_JOB_NAME",
  "model": "publishers/google/models/textembedding-gecko",
  "inputConfig": {
    "instancesFormat": "bigquery",
    "bigquerySource": {
      "inputUri": "INPUT_URI"
    }
  },
  "outputConfig": {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {
      "outputUri": "OUTPUT_URI"
    }
  }
}
```

To send your request, choose one of these options:

#### curl

**Note:** The following command assumes that you have logged in to the `gcloud` CLI with your user account by running [`gcloud init`](/sdk/gcloud/reference/init) or [`gcloud auth login`](/sdk/gcloud/reference/auth/login), or by using [Cloud Shell](/shell/docs), which automatically logs you into the `gcloud` CLI. You can check the currently active account by running [`gcloud auth list`](/sdk/gcloud/reference/auth/list).

Save the request body in a file named `request.json`, and execute the following command:

```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs"
```

#### PowerShell

**Note:** The following command assumes that you have logged in to the `gcloud` CLI with your user account by running [`gcloud init`](/sdk/gcloud/reference/init) or [`gcloud auth login`](/sdk/gcloud/reference/auth/login). You can check the currently active account by running [`gcloud auth list`](/sdk/gcloud/reference/auth/list).
Save the request body in a file named `request.json`, and execute the following command:

```
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs" | Select-Object -Expand Content
```

You should receive a JSON response similar to the following:

```
{
  "name": "projects/123456789012/locations/us-central1/batchPredictionJobs/1234567890123456789",
  "displayName": "BP_sample_publisher_BQ_20230712_134650",
  "model": "projects/{PROJECT_ID}/locations/us-central1/models/textembedding-gecko",
  "inputConfig": {
    "instancesFormat": "bigquery",
    "bigquerySource": {
      "inputUri": "bq://project_name.dataset_name.text_input"
    }
  },
  "modelParameters": {},
  "outputConfig": {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {
      "outputUri": "bq://project_name.llm_dataset.embedding_out_BP_sample_publisher_BQ_20230712_134650"
    }
  },
  "state": "JOB_STATE_PENDING",
  "createTime": "2023-07-12T20:46:52.148717Z",
  "updateTime": "2023-07-12T20:46:52.148717Z",
  "labels": {
    "owner": "sample_owner",
    "product": "llm"
  },
  "modelVersionId": "1",
  "modelMonitoringStatus": {}
}
```

The response includes a unique identifier for the batch job. You can poll for the status of the batch job using the `BATCH_JOB_ID` until the job `state` is `JOB_STATE_SUCCEEDED`. For example:

```bash
curl \
  -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID
```

**Note:** You can run only one batch response job at a time. Custom service accounts, live progress, CMEK, and VPC-SC reports aren't supported at this time.
### Python

#### Install

```
pip install --upgrade google-genai
```

To learn more, see the [SDK reference documentation](https://googleapis.github.io/python-genai/).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=us-central1
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```python
import time

from google import genai
from google.genai.types import CreateBatchJobConfig, JobState, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
# TODO(developer): Update and un-comment below line
# output_uri = "gs://your-bucket/your-prefix"

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.batches.Batches.create
job = client.batches.create(
    model="text-embedding-005",
    # Source link: https://storage.cloud.google.com/cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl
    src="gs://cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl",
    config=CreateBatchJobConfig(dest=output_uri),
)
print(f"Job name: {job.name}")
print(f"Job state: {job.state}")
# Example response:
# Job name: projects/%PROJECT_ID%/locations/us-central1/batchPredictionJobs/9876453210000000000
# Job state: JOB_STATE_PENDING

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.types.BatchJob
completed_states = {
    JobState.JOB_STATE_SUCCEEDED,
    JobState.JOB_STATE_FAILED,
    JobState.JOB_STATE_CANCELLED,
    JobState.JOB_STATE_PAUSED,
}

while job.state not in completed_states:
    time.sleep(30)
    job = client.batches.get(name=job.name)
    print(f"Job state: {job.state}")
    if job.state == JobState.JOB_STATE_FAILED:
        print(f"Error: {job.error}")
        break

# Example response:
# Job state: JOB_STATE_PENDING
# Job state: JOB_STATE_RUNNING
# Job state: JOB_STATE_RUNNING
# ...
# Job state: JOB_STATE_SUCCEEDED
```

Retrieve batch output
---------------------

When a batch prediction task is complete, the output is stored in the Cloud Storage bucket or BigQuery table that you specified in your request.

What's next
-----------

- Learn how to [get text embeddings](/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings).