Batch predictions for Cloud Storage

This page describes how to get batch prediction results using Cloud Storage.

1. Prepare your input

Batch requests for Gemini models accept one JSON Lines (JSONL) file stored in Cloud Storage as input data. Each line in the batch input data is a request to the model and follows the same format as the Gemini API.

For example:

{"request":{"contents": [{"role": "user", "parts": [{"text": "What is the relation between the following video and image samples?"}, {"fileData": {"fileUri": "gs://cloud-samples-data/generative-ai/video/animals.mp4", "mimeType": "video/mp4"}}, {"fileData": {"fileUri": "gs://cloud-samples-data/generative-ai/image/cricket.jpeg", "mimeType": "image/jpeg"}}]}], "generationConfig": {"temperature": 0.9, "topP": 1, "maxOutputTokens": 256}}}
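As a sketch, a JSONL input file in this format can also be generated programmatically; the prompts, generation settings, and output filename below are illustrative assumptions, not part of the sample file above:

```python
import json

# Hypothetical prompts; each one becomes a single batch request line
# in the Gemini API request format.
prompts = [
    "Summarize the plot of Hamlet in one sentence.",
    "What is the capital of Australia?",
]

with open("batch_input.jsonl", "w") as f:
    for prompt in prompts:
        request = {
            "request": {
                "contents": [{"role": "user", "parts": [{"text": prompt}]}],
                "generationConfig": {"temperature": 0.9, "maxOutputTokens": 256},
            }
        }
        # One JSON object per line, as required by the JSONL input format.
        f.write(json.dumps(request) + "\n")
```

You can then upload the resulting file to your bucket, for example with `gsutil cp batch_input.jsonl gs://your-bucket/`.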

Download the sample batch request file

After you prepare your input data and upload it to Cloud Storage, make sure that the AI Platform Service Agent has permission to access the Cloud Storage file.

2. Submit a batch job

You can create a batch job through the Google Cloud console, the Google Gen AI SDK, or the REST API.

Console

  1. In the Vertex AI section of the Google Cloud console, go to the Batch inference page.

    Go to Batch inference

  2. Click Create.

REST

To create a batch prediction job, use the projects.locations.batchPredictionJobs.create method.

Before using any of the request data, make the following replacements:

  • LOCATION: A region that supports Gemini models.
  • PROJECT_ID: Your project ID.
  • MODEL_PATH: The publisher model name, for example publishers/google/models/gemini-2.5-flash, or the tuned endpoint name, for example projects/PROJECT_ID/locations/LOCATION/models/MODEL_ID, where MODEL_ID is the model ID of the tuned model.
  • INPUT_URI: The Cloud Storage location of your JSONL batch prediction input, such as gs://bucketname/path/to/file.jsonl.
  • OUTPUT_FORMAT: To output to a Cloud Storage bucket, specify jsonl.
  • DESTINATION: For BigQuery, specify bigqueryDestination. For Cloud Storage, specify gcsDestination.
  • OUTPUT_URI_FIELD_NAME: For BigQuery, specify outputUri. For Cloud Storage, specify outputUriPrefix.
  • OUTPUT_URI: For BigQuery, specify the table location, such as bq://myproject.mydataset.output_result. The region of the output BigQuery dataset must be the same as the region of the Vertex AI batch prediction job. For Cloud Storage, specify the bucket and directory location, such as gs://mybucket/path/to/output.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs

Request JSON body:

{
  "displayName": "my-cloud-storage-batch-prediction-job",
  "model": "MODEL_PATH",
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": {
      "uris": ["INPUT_URI"]
    }
  },
  "outputConfig": {
    "predictionsFormat": "OUTPUT_FORMAT",
    "DESTINATION": {
      "OUTPUT_URI_FIELD_NAME": "OUTPUT_URI"
    }
  }
}
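For example, with the Cloud Storage placeholders filled in (the bucket paths here are hypothetical), the request body would look like this:

```json
{
  "displayName": "my-cloud-storage-batch-prediction-job",
  "model": "publishers/google/models/gemini-2.5-flash",
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": {
      "uris": ["gs://mybucket/path/to/file.jsonl"]
    }
  },
  "outputConfig": {
    "predictionsFormat": "jsonl",
    "gcsDestination": {
      "outputUriPrefix": "gs://mybucket/path/to/output"
    }
  }
}
```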

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs"

PowerShell

Save the request body in a file named request.json, and then execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs" | Select-Object -Expand Content

You should receive a JSON response similar to the following.

The response includes a unique identifier for the batch job. You can poll the status of the batch job by using its BATCH_JOB_ID. For more information, see Monitor the job status and progress. Note: Custom service accounts, live progress reporting, CMEK, and VPC-SC reporting are not supported.

Python

Install

pip install --upgrade google-genai

For more information, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import time

from google import genai
from google.genai.types import CreateBatchJobConfig, JobState, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
# TODO(developer): Update and un-comment below line
# output_uri = "gs://your-bucket/your-prefix"

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.batches.Batches.create
job = client.batches.create(
    # To use a tuned model, set the model param to your tuned model using the following format:
    # model="projects/{PROJECT_ID}/locations/{LOCATION}/models/{MODEL_ID}"
    model="gemini-2.5-flash",
    # Source link: https://storage.cloud.google.com/cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl
    src="gs://cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl",
    config=CreateBatchJobConfig(dest=output_uri),
)
print(f"Job name: {job.name}")
print(f"Job state: {job.state}")
# Example response:
# Job name: projects/%PROJECT_ID%/locations/us-central1/batchPredictionJobs/9876453210000000000
# Job state: JOB_STATE_PENDING

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.types.BatchJob
completed_states = {
    JobState.JOB_STATE_SUCCEEDED,
    JobState.JOB_STATE_FAILED,
    JobState.JOB_STATE_CANCELLED,
    JobState.JOB_STATE_PAUSED,
}

while job.state not in completed_states:
    time.sleep(30)
    job = client.batches.get(name=job.name)
    print(f"Job state: {job.state}")
# Example response:
# Job state: JOB_STATE_PENDING
# Job state: JOB_STATE_RUNNING
# Job state: JOB_STATE_RUNNING
# ...
# Job state: JOB_STATE_SUCCEEDED

3. Monitor the job status and progress

After you submit the job, you can check the status of your batch job by using the API, the SDK, or the Cloud console.

Console

  1. Go to the Batch inference page.

    Go to Batch inference

  2. Select your batch job to monitor its progress.

REST

To monitor a batch prediction job, use the projects.locations.batchPredictionJobs.get method and view the CompletionStats field in the response.

Before using any of the request data, make the following replacements:

  • LOCATION: A region that supports Gemini models.
  • PROJECT_ID: Your project ID.
  • BATCH_JOB_ID: Your batch job ID.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs/BATCH_JOB_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs/BATCH_JOB_ID"

PowerShell

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs/BATCH_JOB_ID" | Select-Object -Expand Content

You should receive a JSON response similar to the following.

Python

Install

pip install --upgrade google-genai

For more information, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import time

from google import genai
from google.genai.types import CreateBatchJobConfig, JobState, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
# TODO(developer): Update and un-comment below line
# output_uri = "gs://your-bucket/your-prefix"

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.batches.Batches.create
job = client.batches.create(
    # To use a tuned model, set the model param to your tuned model using the following format:
    # model="projects/{PROJECT_ID}/locations/{LOCATION}/models/{MODEL_ID}"
    model="gemini-2.5-flash",
    # Source link: https://storage.cloud.google.com/cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl
    src="gs://cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl",
    config=CreateBatchJobConfig(dest=output_uri),
)
print(f"Job name: {job.name}")
print(f"Job state: {job.state}")
# Example response:
# Job name: projects/%PROJECT_ID%/locations/us-central1/batchPredictionJobs/9876453210000000000
# Job state: JOB_STATE_PENDING

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.types.BatchJob
completed_states = {
    JobState.JOB_STATE_SUCCEEDED,
    JobState.JOB_STATE_FAILED,
    JobState.JOB_STATE_CANCELLED,
    JobState.JOB_STATE_PAUSED,
}

while job.state not in completed_states:
    time.sleep(30)
    job = client.batches.get(name=job.name)
    print(f"Job state: {job.state}")
# Example response:
# Job state: JOB_STATE_PENDING
# Job state: JOB_STATE_RUNNING
# Job state: JOB_STATE_RUNNING
# ...
# Job state: JOB_STATE_SUCCEEDED

The state of a given batch job can be any of the following:

  • JOB_STATE_PENDING: The job is queued and waiting for capacity. A job can remain in the queue state for up to 72 hours before it enters the running state.
  • JOB_STATE_RUNNING: The input file was validated successfully, and the batch job is currently running.
  • JOB_STATE_SUCCEEDED: The batch job has completed, and the results are ready.
  • JOB_STATE_FAILED: The input file failed the validation process, or the job couldn't complete within 24 hours after entering the RUNNING state.
  • JOB_STATE_CANCELLING: The batch job is being cancelled.
  • JOB_STATE_CANCELLED: The batch job has been cancelled.

4. Retrieve batch output

When a batch prediction job completes, the output is stored in the Cloud Storage bucket that you specified when you created the job. For succeeded rows, the model responses are stored in the response field. Otherwise, error details are stored in the status field for further inspection.

During long-running jobs, completed predictions are continuously exported to the specified output destination. If a batch prediction job is terminated, all completed rows are exported. You are only charged for completed predictions.
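Since each output line carries its own status field, a small script can split successes from failures. This is a minimal sketch; the helper name and the abbreviated sample records are assumptions for illustration:

```python
import json

def split_results(lines):
    """Separate batch output records into successes and failures."""
    succeeded, failed = [], []
    for line in lines:
        record = json.loads(line)
        # An empty status string indicates success; otherwise it holds error details.
        if record.get("status"):
            failed.append(record)
        else:
            # The model answer lives under response.candidates[0].content.parts.
            succeeded.append(record["response"])
    return succeeded, failed

# Two hypothetical, abbreviated output records.
sample = [
    '{"status": "", "response": {"candidates": [{"content": {"parts": [{"text": "ok"}]}}]}}',
    '{"status": "Bad Request: ...", "response": {}}',
]
succeeded, failed = split_results(sample)
print(len(succeeded), len(failed))  # → 1 1
```

The same function works on the downloaded predictions file, for example after `gsutil cp` of the output object, by passing in the file's lines.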

Output examples

A successful example

{
  "status": "",
  "processed_time": "2024-11-01T18:13:16.826+00:00",
  "request": {
    "contents": [
      {
        "parts": [
          {
            "fileData": null,
            "text": "What is the relation between the following video and image samples?"
          },
          {
            "fileData": {
              "fileUri": "gs://cloud-samples-data/generative-ai/video/animals.mp4",
              "mimeType": "video/mp4"
            },
            "text": null
          },
          {
            "fileData": {
              "fileUri": "gs://cloud-samples-data/generative-ai/image/cricket.jpeg",
              "mimeType": "image/jpeg"
            },
            "text": null
          }
        ],
        "role": "user"
      }
    ]
  },
  "response": {
    "candidates": [
      {
        "avgLogprobs": -0.5782725546095107,
        "content": {
          "parts": [
            {
              "text": "This video shows a Google Photos marketing campaign where animals at the Los Angeles Zoo take self-portraits using a modified Google phone housed in a protective case. The image is unrelated."
            }
          ],
          "role": "model"
        },
        "finishReason": "STOP"
      }
    ],
    "modelVersion": "gemini-2.0-flash-001@default",
    "usageMetadata": {
      "candidatesTokenCount": 36,
      "promptTokenCount": 29180,
      "totalTokenCount": 29216
    }
  }
}

A failed example

{
  "status": "Bad Request: {\"error\": {\"code\": 400, \"message\": \"Please use a valid role: user, model.\", \"status\": \"INVALID_ARGUMENT\"}}",
  "processed_time": "2025-07-09T19:57:43.558+00:00",
  "request": {
    "contents": [
      {
        "parts": [
          {
            "text": "Explain how AI works in a few words"
          }
        ],
        "role": "tester"
      }
    ]
  },
  "response": {}
}