```json
{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "Give me a recipe for banana bread."
        }
      ]
    }
  ],
  "system_instruction": {
    "parts": [
      {
        "text": "You are a chef."
      }
    ]
  }
}
```
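When a request like this is used as batch input, each line of the input file wraps it in a top-level `request` field, following the same convention as the JSONL examples below:

```json
{"request": {"contents": [{"role": "user", "parts": [{"text": "Give me a recipe for banana bread."}]}], "system_instruction": {"parts": [{"text": "You are a chef."}]}}}
```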
Cloud Storage input
-------------------

- File format: JSON Lines (JSONL)
- Located in `us-central1`
- Appropriate read permissions granted to the service account
- fileData limitations for certain Gemini models

Example input (JSONL)
---------------------
{"request":{"contents": [{"role": "user", "parts": [{"text": "What is the relation between the following video and image samples?"}, {"file_data": {"file_uri": "gs://cloud-samples-data/generative-ai/video/animals.mp4", "mime_type": "video/mp4"}}, {"file_data": {"file_uri": "gs://cloud-samples-data/generative-ai/image/cricket.jpeg", "mime_type": "image/jpeg"}}]}]}}
{"request":{"contents": [{"role": "user", "parts": [{"text": "Describe what is happening in this video."}, {"file_data": {"file_uri": "gs://cloud-samples-data/generative-ai/video/another_video.mov", "mime_type": "video/mov"}}]}]}}
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-03。"],[],[],null,["# Batch prediction with Gemini\n\n| To see an example of using batch predictions,\n| run the \"Intro to Batch Predictions with the Gemini API\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/batch-prediction/intro_batch_prediction.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fbatch-prediction%2Fintro_batch_prediction.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fbatch-prediction%2Fintro_batch_prediction.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/batch-prediction/intro_batch_prediction.ipynb)\n\nGet asynchronous, high-throughput, and cost-effective inference for your\nlarge-scale data processing needs with Gemini's batch prediction capabilities.\nThis guide will walk you through the value of batch prediction, how it works,\nits limitations, and best practices for optimal results.\n\nWhy use batch prediction?\n-------------------------\n\nIn many real-world scenarios, you don't need an immediate response from a\nlanguage model. Instead, you might have a large dataset of prompts that you need\nto process efficiently and affordably. This is where batch prediction shines.\n\n**Key benefits include:**\n\n- **Cost-Effectiveness:** Batch processing is offered at a 50% discounted rate compared to real-time inference, making it ideal for large-scale, non-urgent tasks.\n- **High rate limits:** Process hundreds of thousands of requests in a single batch with a higher rate limit compared to the real time Gemini API.\n- **Simplified Workflow:** Instead of managing a complex pipeline of individual real-time requests, you can submit a single batch job and retrieve the results once the processing is complete. 
Gemini models that support batch predictions
--------------------------------------------

The following base and tuned Gemini models support batch predictions:

- [Gemini 2.5 Pro](/vertex-ai/generative-ai/docs/models/gemini/2-5-pro)
- [Gemini 2.5 Flash](/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
- [Gemini 2.5 Flash-Lite](/vertex-ai/generative-ai/docs/models/gemini/2-5-flash-lite)
- [Gemini 2.0 Flash](/vertex-ai/generative-ai/docs/models/gemini/2-0-flash)
- [Gemini 2.0 Flash-Lite](/vertex-ai/generative-ai/docs/models/gemini/2-0-flash-lite)

Quotas and limits
-----------------

While batch prediction is powerful, it's important to be aware of the following limitations:

- **Quota:** There are no predefined quota limits on your usage. Instead, the batch service provides access to a large, shared pool of resources, dynamically allocated based on resource availability and real-time demand across all customers of that model. When many customers are active and capacity is saturated, your batch requests may be queued until capacity becomes available.
- **Queue time:** When the service experiences high traffic, your batch job will queue for capacity. A job can wait in the queue for up to 72 hours before it expires.
- **Request limits:** A single batch job may include up to 200,000 requests. If you use Cloud Storage as input, there is also a file size limit of 1 GB.
- **Processing time:** Batch jobs are processed asynchronously and are not designed for real-time applications. Most jobs complete within 24 hours after they start running (not counting queue time). After 24 hours, incomplete jobs are cancelled, and you are charged only for the completed requests.
- **Unsupported features:** Batch prediction does not support [Context Caching](/vertex-ai/generative-ai/docs/context-cache/context-cache-overview), [RAG](/vertex-ai/generative-ai/docs/rag-engine/rag-overview), or [Global endpoints](/vertex-ai/generative-ai/docs/learn/locations#global-endpoint).

> **Note:** Batch prediction is not a [Covered Service](/vertex-ai/sla) and is excluded from the Service Level Objective (SLO) of any Service Level Agreement (SLA).

Best practices
--------------

To get the most out of batch prediction with Gemini, we recommend the following best practices:

- **Combine jobs:** To maximize throughput, combine smaller jobs into one large job, within the system limits. For example, submitting one batch job with 200,000 requests gives better throughput than 1,000 jobs with 200 requests each.
- **Monitor job status:** You can monitor job progress using the API, SDK, or UI; for more information, see [monitor the job status](/vertex-ai/generative-ai/docs/multimodal/batch-prediction-from-cloud-storage#monitor), and see the polling sketch after this list. If a job fails, check the error messages to diagnose and troubleshoot the issue.
- **Optimize for cost:** Take advantage of the cost savings offered by batch processing for any tasks that don't require an immediate response.
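For the monitoring step, here is a minimal polling sketch that continues from the submit sketch above (it reuses that `client` and `job`; the 30-second interval is an arbitrary choice):

```python
import time

from google.genai.types import JobState

# States in which the job has stopped making progress.
done_states = {
    JobState.JOB_STATE_SUCCEEDED,
    JobState.JOB_STATE_FAILED,
    JobState.JOB_STATE_CANCELLED,
    JobState.JOB_STATE_PAUSED,
}

# `client` and `job` come from the submit sketch above.
while job.state not in done_states:
    time.sleep(30)                           # poll every 30 seconds
    job = client.batches.get(name=job.name)  # refresh the job status
    print(job.state)

if job.state == JobState.JOB_STATE_FAILED:
    print(job.error)  # inspect error details to troubleshoot
```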
What's next
-----------

- [Create a batch job with Cloud Storage](/vertex-ai/generative-ai/docs/multimodal/batch-prediction-from-cloud-storage)
- [Create a batch job with BigQuery](/vertex-ai/generative-ai/docs/multimodal/batch-prediction-from-bigquery)
- Learn how to tune a Gemini model in [Overview of model tuning for Gemini](/vertex-ai/generative-ai/docs/models/tune-gemini-overview)
- Learn more about the [Batch prediction API](/vertex-ai/generative-ai/docs/model-reference/batch-prediction-api)